
IRC log for #gluster, 2013-08-11


All times shown according to UTC.

Time Nick Message
01:12 yinyin joined #gluster
01:36 harish joined #gluster
01:39 mjrosenb joined #gluster
01:41 ThatGraemeGuy joined #gluster
01:54 _pol joined #gluster
02:16 jebba joined #gluster
02:16 sprachgenerator joined #gluster
03:18 badone joined #gluster
05:58 pea_brain joined #gluster
06:27 atrius joined #gluster
06:47 ekuric joined #gluster
06:50 satheesh1 joined #gluster
08:02 ricky-ticky joined #gluster
08:52 a2 joined #gluster
08:52 y4m4 joined #gluster
09:05 social any idea what is this trying to tell me? I [dict.c:370:dict_get] (-->/usr/lib64/glusterfs/3.4.0/xlator/performance/md-cache.so(mdc_lookup+0x2f8) [0x7fcfadfb7078] (-->/usr/lib64/glusterfs/3.4.0/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x113) [0x7fcfadd9f1e3] (-->/usr/lib64/glusterfs/3.4.0/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x233) [0x7fcfadb91193]))) 0-dict: !this || key=system.posix_acl_default
09:45 ThatGraemeGuy joined #gluster
09:48 pea_brain joined #gluster
10:12 ujjain joined #gluster
10:18 minoritystorm joined #gluster
10:18 minoritystorm any gluster core dev here ?
10:19 minoritystorm need to ask a question regarding gluster / fuse interactions and/or limitations
11:18 minoritystorm fine.. till a gluster developer shows up.. I have a single brick gluster 3.4 deployment over SSD storage and I'm getting maximum of 60 MB/s of write speed.. please note that only one node is active so it initially looks like a fuse limitation for me.. if I try to run another write process on the same gluster volume at the same time, each of the 2 writes are limited to ~20 to ~25 MB/s
11:18 minoritystorm I am suspecting fuse.. any other thoughts ?
11:20 minoritystorm normal disk write speed (direct brick write speed) is ~800 MB/s.. using an XFS filesystem
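A standard way to isolate FUSE overhead from brick throughput in a case like this is to time the same synced write against the raw brick and the gluster mount. The sketch below defaults to throwaway paths under /tmp so it runs anywhere; to reproduce the comparison above you would point BRICK at the XFS brick directory and MOUNT at the gluster FUSE mount (both paths here are hypothetical):

```shell
#!/bin/sh
# BRICK: the raw local filesystem backing the brick (e.g. the XFS mount).
# MOUNT: the gluster FUSE mount of the volume built on that brick.
# Defaults are local temp dirs so the script is runnable as-is.
BRICK=${BRICK:-/tmp/brick-test}
MOUNT=${MOUNT:-/tmp/fuse-test}
mkdir -p "$BRICK" "$MOUNT"

# conv=fdatasync makes dd flush before reporting, so the rate reflects
# real storage throughput rather than the page cache.
dd if=/dev/zero of="$BRICK/ddtest" bs=1M count=64 conv=fdatasync
dd if=/dev/zero of="$MOUNT/ddtest" bs=1M count=64 conv=fdatasync

rm -f "$BRICK/ddtest" "$MOUNT/ddtest"
```

If the brick number is ~800 MB/s and the mount number is ~60 MB/s on the same host, the gap is in the gluster/FUSE path rather than the disk, which is what the test here is trying to establish.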
11:57 satheesh joined #gluster
12:47 recidive joined #gluster
12:48 bala joined #gluster
12:54 satheesh1 joined #gluster
13:26 mattf joined #gluster
13:27 satheesh joined #gluster
13:27 mattf joined #gluster
13:29 lalatenduM joined #gluster
13:49 satheesh1 joined #gluster
14:26 rotbeard joined #gluster
14:27 rotbeard left #gluster
14:27 satheesh joined #gluster
15:09 ultrabizweb joined #gluster
15:29 awheeler joined #gluster
15:29 social hi
15:29 glusterbot social: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:29 social seems like we hit heavy fdleak on 3.4.0
15:29 social less /mnt/tmp/something_wrong_with_gluster
15:29 social grep deleted /mnt/tmp/something_wrong_with_gluster | wc -l
15:29 social 814867
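The count above appears to come from grepping a saved fd dump for "deleted" entries. On a live system, one minimal way to count deleted-but-still-open descriptors for a process is to inspect its /proc fd symlinks; the script below defaults to the current shell's PID so it runs anywhere, but for a leak like this you would point PID at the glusterfs process (e.g. via pgrep):

```shell
#!/bin/sh
# Count file descriptors a process holds open on files that have already
# been unlinked -- these show up as "(deleted)" in /proc/PID/fd symlinks.
# PID defaults to the current shell; for gluster you might use
# PID=$(pgrep -of glusterfs) instead (hypothetical target).
PID=${PID:-$$}
LEAKED=$(ls -l "/proc/$PID/fd" 2>/dev/null | grep -c '(deleted)')
echo "$LEAKED deleted-but-open fds for pid $PID"
```

A steadily growing count here while the workload is stable is the usual signature of an fd leak like the one reported.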
15:33 ultrabizweb joined #gluster
15:37 awheeler joined #gluster
15:41 awheele__ joined #gluster
15:41 darinschmidt joined #gluster
15:47 darinschmidt hello there. i have some questions. I dont think i fully understand what glusterfs is. Is gluster some kind of file system that enables you to make storage redundant over the network? Such as make your entire farm of storage servers appear as 1?
15:49 iksik_ joined #gluster
15:49 iksik_ hello
15:49 glusterbot iksik_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:51 iksik_ i'm trying to compile glusterfs-3.4.0 under freebsd 9.1 (with up to date system, ports tree and installed packages + gluster dependencies - all i think), and the problem i have is an error during compilation: http://pastebin.com/iJb2HjdF - this is my first time with glusterfs, so it is highly possible that i'm missing something very obvious ;s
15:51 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
15:52 iksik_ https://gist.github.com/krzysztofantczak/f07dfaaf26ddd14f4f08
15:52 glusterbot <http://goo.gl/c2onFu> (at gist.github.com)
15:52 iksik_ ;-)
15:53 iksik_ https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=893795 - this is the thread i have googled when looking for some solution, it seems to be the same problem
15:53 glusterbot <http://goo.gl/PBJRFQ> (at bugzilla.redhat.com)
15:53 awheeler joined #gluster
16:26 atrius joined #gluster
16:33 awheeler joined #gluster
16:40 awheeler joined #gluster
16:50 lalatenduM joined #gluster
17:34 edward1 joined #gluster
17:34 edward1 joined #gluster
17:38 chirino joined #gluster
17:40 pea_brain joined #gluster
17:46 chirino joined #gluster
18:01 duerF joined #gluster
18:25 ricky-ticky joined #gluster
18:43 awheeler joined #gluster
19:13 duerF joined #gluster
20:14 dhsmith joined #gluster
20:19 edward2 joined #gluster
21:23 chirino joined #gluster
21:31 MugginsM joined #gluster
21:33 chirino joined #gluster
21:40 chirino joined #gluster
22:31 Technicool joined #gluster
22:37 minoritystorm joined #gluster
22:43 joshit_ joined #gluster
22:43 joshit_ JoeJulian, around this morning?
22:50 minoritystorm I have a single brick gluster 3.4 deployment over SSD storage and I'm getting maximum of 60 MB/s of write speed.. please note that only one node is active so it initially looks like a fuse limitation for me.. if I try to run another write process on the same gluster volume at the same time, each of the 2 writes are limited to ~20 to ~25 MB/s
22:50 minoritystorm I am suspecting fuse.. any other thoughts ?
22:51 minoritystorm normal disk write speed (direct brick write speed) is ~800 MB/s.. using an XFS filesystem
22:53 joshit_ yes but gluster also operates over a network
22:54 joshit_ and networks can also have bottlenecks so maybe its not a fuse limitation?
22:57 minoritystorm network is a 10gig.. communication between any 2 nodes is averaged about 7gig/s
22:57 minoritystorm but.. how is it supposed to operate over network while yet its writing to a local brick !?
23:00 chirino joined #gluster
23:02 joshit_ have you got a standard config?
23:02 joshit_ or have you tweaked it?
23:04 minoritystorm nothing tweaked.. just a normal replicated setup
23:05 minoritystorm over 2 nodes however disabled one of the two nodes and testing locally on the enabled node
23:07 jebba joined #gluster
23:09 joshit_ do copy tests, do scp tests
23:09 joshit_ try and isolate the prob
23:11 minoritystorm well.. so far.. its either glusterfsd or glusterfs or probably fuse.. any of these 3
23:14 minoritystorm and still.. I can't digest the idea that I should suspect the network.. I can run the same tests without the network cable on.. no?
23:26 chirino joined #gluster
23:28 darinschmidt is glusterfs a file system like zfs except gluster is capable of datamanagement over the network or something?
23:31 minoritystorm darinschmidt, nop.. gluster requires a lower layer fs like ext or xfs or probably zfs
23:31 darinschmidt ok so zfs layer then gluster on top?
23:31 darinschmidt and that manages the data over the network or something. for some reason im not quite grasping what gluster does
23:32 fidevo joined #gluster
23:33 darinschmidt sorry for being so noobish
23:38 joshit_ zfs is just the filesystem
23:38 joshit_ gluster is program that sits on top and syncs the bricks
23:38 darinschmidt ah ok, thats what i was thinking but i wasnt quite sure i was putting the pieces together
23:39 joshit_ e.g /mnt/data = brick one server, another server /mnt/data = brick
23:40 joshit_ e.g. /dev/disk/by/id blah blah mounted to /mnt/data
23:41 darinschmidt with gluster installed, do you still use zfs to create datasets or does gluster take over from there?
23:41 joshit_ but then you gluster would glusterfs mount ip:/mnt/data/blah
23:41 darinschmidt ah, gotcha, ok
23:41 darinschmidt making sense now
23:41 joshit_ zfs does fs stuff
23:41 joshit_ gluster does gluster stuff
23:41 joshit_ 2 diff systems
23:42 joshit_ you can mix and match
23:42 joshit_ but they both do their own type of jobs
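The layering joshit_ describes (local filesystem per server, gluster syncing the bricks on top, clients mounting the volume) can be written out as a command sketch. Hostnames, device, volume name, and paths below are all hypothetical, and this is only an outline of the gluster 3.4-era CLI, not a tested deployment:

```shell
# On each server: a local filesystem (XFS, ZFS, ...) is the brick's home.
mkfs.xfs /dev/sdb1
mount /dev/sdb1 /mnt/data

# Once, from either server: join the two bricks into a replicated volume.
gluster volume create myvol replica 2 \
    server1:/mnt/data/brick server2:/mnt/data/brick
gluster volume start myvol

# On a client: mount the volume over FUSE; gluster replicates the writes.
mount -t glusterfs server1:/myvol /mnt/gluster
```

So zfs/xfs owns the on-disk format on each box, while gluster only sees directories (bricks) and keeps them in sync, which is the "2 diff systems" split described above.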
23:43 darinschmidt ok because im trying to find a scalable solution to adding storage servers to my home network but wanted the servers to appear as one databank. kinda like a raid of servers and i dont think that zfs is capable of doing that
23:44 joshit_ zfs is a big pool
23:44 joshit_ think of it as that
23:44 joshit_ one big pool that you can easily take snapshots
23:44 darinschmidt yeah but the storage in zfs has to be local storage right?
23:44 joshit_ read on it as im still new
23:44 joshit_ in that area
23:45 chirino joined #gluster
23:46 darinschmidt going to have to dive more into their manuals
23:47 minoritystorm I have this WEIRD behavior.. I run something like `while true;do dd if=/dev/zero of=test bs=1024 count=1024;rm -f test; done` over one of the glusterfs replicated volume and I get around ~4.7 MB/s, however if I start another while loop but with a different name over the same replicated volume I get both loops with a speed limit of ~1.8 MB/s
23:49 minoritystorm so again I suspect like if there're some sort of global FUSE limitation
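One thing worth noting about the loop above: bs=1024 means each write is only 1 KiB, and with FUSE every write is a round trip through userspace, so tiny block sizes tend to be dominated by per-call overhead rather than any global throughput cap. A quick way to check whether block size (rather than FUSE itself) is the ceiling is to write the same total volume at two block sizes. DIR defaults to /tmp so the sketch runs anywhere; on the setup above you would point it at the gluster mount (path hypothetical):

```shell
#!/bin/sh
# Compare throughput at 1 KiB vs 1 MiB write sizes on the same target.
# DIR defaults to /tmp; set DIR=/mnt/gluster (or similar) to test FUSE.
DIR=${DIR:-/tmp}

# Same total volume (16 MiB) written both ways; dd's rate summary goes
# to stderr, so redirect and keep the last line of each run.
dd if=/dev/zero of="$DIR/bs-test" bs=1024 count=16384 conv=fdatasync 2>&1 | tail -n1
dd if=/dev/zero of="$DIR/bs-test" bs=1M count=16 conv=fdatasync 2>&1 | tail -n1

rm -f "$DIR/bs-test"
```

If the 1 MiB run is several times faster on the gluster mount, the ~4.7 MB/s figure is mostly syscall/round-trip overhead at bs=1024, not a shared FUSE bandwidth limit.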
23:51 darinschmidt where is the gluser man page online? or is three one? i cant seem to locate it in the documents section. i want to see what commands are available and what they are for
