
IRC log for #gluster, 2014-12-02


All times shown according to UTC.

Time Nick Message
00:17 B21956 left #gluster
00:39 huleboer joined #gluster
00:47 chirino joined #gluster
00:52 plarsen joined #gluster
01:10 XpineX joined #gluster
01:32 MugginsM joined #gluster
01:37 bharata-rao joined #gluster
01:50 huleboer joined #gluster
02:01 haomaiwa_ joined #gluster
02:21 calisto joined #gluster
02:29 cleo_ joined #gluster
02:31 cleo_ before gluster ver. 3.1, was glusterfs as popular as it is now?
02:33 cleo_ i heard that the elastic hash algorithm has been used since gluster ver. 3.1
02:34 cleo_ i really don't get the differences.
02:47 badone joined #gluster
02:48 bala joined #gluster
02:48 wgao joined #gluster
02:59 meghanam joined #gluster
02:59 meghanam_ joined #gluster
03:32 hagarth joined #gluster
03:34 kshlm joined #gluster
03:55 itisravi joined #gluster
03:56 kanagaraj joined #gluster
03:58 shubhendu joined #gluster
03:58 RameshN joined #gluster
04:06 feeshon joined #gluster
04:07 dusmant joined #gluster
04:10 harish joined #gluster
04:17 meghanam joined #gluster
04:17 meghanam_ joined #gluster
04:25 atinmu joined #gluster
04:28 jiffin joined #gluster
04:36 ArminderS joined #gluster
04:37 ArminderS- joined #gluster
04:38 SOLDIERz_ joined #gluster
04:40 soumya|afk joined #gluster
04:40 nbalachandran joined #gluster
04:41 rafi1 joined #gluster
04:46 plarsen joined #gluster
05:19 anil joined #gluster
05:20 sahina joined #gluster
05:21 meghanam joined #gluster
05:22 meghanam_ joined #gluster
05:24 prasanth_ joined #gluster
05:25 spandit joined #gluster
05:26 kdhananjay joined #gluster
05:30 atalur joined #gluster
05:36 Humble joined #gluster
05:39 ppai joined #gluster
05:39 dusmant joined #gluster
05:40 topshare joined #gluster
05:42 anoopcs joined #gluster
05:43 overclk joined #gluster
05:44 maveric_amitc_ joined #gluster
05:48 rjoseph joined #gluster
05:49 saurabh joined #gluster
05:52 ramteid joined #gluster
05:52 XpineX joined #gluster
05:55 jiffin joined #gluster
05:55 atinmu joined #gluster
05:57 kumar joined #gluster
06:03 smohan joined #gluster
06:18 ekuric joined #gluster
06:18 dusmant joined #gluster
06:19 sahina joined #gluster
06:20 shubhendu joined #gluster
06:20 anil joined #gluster
06:25 soumya|afk joined #gluster
06:32 deepakcs joined #gluster
06:40 ppai joined #gluster
06:44 Fetch joined #gluster
06:47 ArminderS joined #gluster
06:49 atinmu joined #gluster
06:58 hagarth joined #gluster
07:03 ctria joined #gluster
07:14 shubhendu joined #gluster
07:18 nshaikh joined #gluster
07:23 sahina joined #gluster
07:28 poornima joined #gluster
07:28 raghu` joined #gluster
07:41 anil joined #gluster
08:00 SOLDIERz_ joined #gluster
08:02 atalur joined #gluster
08:03 ArminderS joined #gluster
08:04 AaronGr joined #gluster
08:07 necrogami joined #gluster
08:07 ArminderS joined #gluster
08:07 Philambdo joined #gluster
08:08 dusmant joined #gluster
08:09 elico joined #gluster
08:11 poornima joined #gluster
08:14 SOLDIERz_ joined #gluster
08:15 [Enrico] joined #gluster
08:21 masterzen joined #gluster
08:22 harish joined #gluster
08:30 dusmant joined #gluster
08:37 smohan joined #gluster
08:45 warci joined #gluster
08:46 vimal joined #gluster
08:56 LebedevRI joined #gluster
08:58 ctria joined #gluster
09:02 harish joined #gluster
09:08 topshare joined #gluster
09:12 liquidat joined #gluster
09:12 anil joined #gluster
09:19 SOLDIERz joined #gluster
09:21 poornima joined #gluster
09:22 atalur joined #gluster
09:24 rgustafs joined #gluster
09:25 aravindavk joined #gluster
09:28 glusterbot News from newglusterbugs: [Bug 1169701] cann't start nfs process when " /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs.." is running <https://bugzilla.redhat.com/show_bug.cgi?id=1169701>
09:28 glusterbot News from newglusterbugs: [Bug 1169707] `gluster peer probe hostname` fails when hostname contains '_' <https://bugzilla.redhat.com/show_bug.cgi?id=1169707>
09:30 dusmant joined #gluster
09:50 sahina joined #gluster
09:59 Norky joined #gluster
10:05 rjoseph joined #gluster
10:05 T0aD joined #gluster
10:07 feeshon joined #gluster
10:17 jvandewege_ Good morning, I'm doing some performance testing and I'm following ndevos's blog post on gluster.org, but getting an error:  volume create: fast: failed: Another transaction is in progress. Please try again after sometime
10:18 jvandewege_ Anyone know where this comes from? Got some volumes already running on this system, gluster-3.6.1
10:19 gildub joined #gluster
10:25 jvandewege_ hmmm, service glusterd restart fixed this. Weird
10:26 hagarth jvandewege_: that is usually seen because of a stale in-memory lock held by glusterd
10:26 hagarth so a restart would have fixed the problem
10:26 jvandewege_ hagarth: first time I have seen this.
10:27 sahina joined #gluster
10:28 dusmant joined #gluster
10:37 rjoseph joined #gluster
10:39 soumya_ joined #gluster
10:45 morse joined #gluster
10:49 elico joined #gluster
10:55 SOLDIERz joined #gluster
10:58 shylesh__ joined #gluster
10:59 harish joined #gluster
11:06 harish joined #gluster
11:07 mator joined #gluster
11:09 harish joined #gluster
11:11 SOLDIERz_ joined #gluster
11:19 kkeithley1 joined #gluster
11:25 ndevos REMINDER: in ~30 minutes the Gluster Community Bug Triage meeting starts in #gluster-meeting
11:42 Humble_pto joined #gluster
11:48 diegows joined #gluster
11:55 bene joined #gluster
11:58 calisto joined #gluster
11:59 atalur joined #gluster
11:59 XpineX joined #gluster
11:59 meghanam joined #gluster
11:59 jdarcy joined #gluster
11:59 meghanam_ joined #gluster
12:00 ndevos REMINDER: the Gluster Community Bug Triage meeting starts now in #gluster-meeting
12:10 hagarth joined #gluster
12:13 mojibake joined #gluster
12:19 meghanam joined #gluster
12:19 meghanam_ joined #gluster
12:21 itisravi_ joined #gluster
12:26 itisravi joined #gluster
12:30 nishanth joined #gluster
12:30 glusterbot News from newglusterbugs: [Bug 1168574] Partition disappearing <https://bugzilla.redhat.com/show_bug.cgi?id=1168574>
12:31 smohan_ joined #gluster
12:32 kanagaraj joined #gluster
12:38 mojibake joined #gluster
12:39 sahina joined #gluster
12:42 [Enrico] joined #gluster
12:46 vimal joined #gluster
12:50 harish joined #gluster
12:55 edward1 joined #gluster
12:58 T0aD joined #gluster
12:59 calisto1 joined #gluster
12:59 overclk joined #gluster
13:01 glusterbot News from newglusterbugs: [Bug 1169784] `gluster peer probe hostname` fails when hostname contains '_' <https://bugzilla.redhat.com/show_bug.cgi?id=1169784>
13:01 glusterbot News from resolvedglusterbugs: [Bug 1169701] cann't start nfs process when " /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs.." is running <https://bugzilla.redhat.com/show_bug.cgi?id=1169701>
13:01 overclk hagarth, ping, joining hangout?
13:02 hagarth overclk: getting on to the hangout
13:03 hagarth overclk: having troubles getting on :-/
13:04 ndevos ... The live video broadcast will begin soon.
13:04 hagarth this is the URL right - https://plus.google.com/u/1/events/cfrka4gmr67s131lfj5o37pje4o
13:04 anoopcs1 joined #gluster
13:04 overclk hagarth, yep
13:04 overclk ndevos, same here..
13:05 hagarth overclk: why don't you schedule one now and get started with that?
13:06 hagarth overclk: looks like davemc needs to press a few buttons for the scheduled hangout to start :)
13:07 overclk hagarth, yep. i'll schedule it now..
13:09 John_HPC joined #gluster
13:11 overclk hagarth, https://plus.google.com/hangouts/_/hoaevent/AP36tYe1ZKJJfdyagrQrux99YWq4lsOpkhJXQs4YjlwkpHpfkx8PFQ
13:14 ppai joined #gluster
13:15 shubhendu joined #gluster
13:18 hagarth bitrot hangout happening at - https://plus.google.com/hangouts/_/hoaevent/AP36tYe1ZKJJfdyagrQrux99YWq4lsOpkhJXQs4YjlwkpHpfkx8PFQ
13:23 SOLDIERz_ joined #gluster
13:34 anoopcs joined #gluster
13:36 anoopcs joined #gluster
13:38 topshare joined #gluster
13:39 topshare joined #gluster
13:41 _Bryan_ joined #gluster
13:42 Alphamax joined #gluster
13:46 aravindavk joined #gluster
13:48 calisto joined #gluster
13:53 coredump joined #gluster
13:58 B21956 joined #gluster
13:59 smohan joined #gluster
14:03 ctria joined #gluster
14:07 virusuy joined #gluster
14:07 virusuy joined #gluster
14:07 poornima joined #gluster
14:08 ira joined #gluster
14:12 dusmant joined #gluster
14:13 Alphamax Hi, I use gluster 3.6.1 with a volume on 5 replicas and have "lock" problems. What can I look at to solve the problem? On some servers I get "Unlocking failed on ccsvli79. Please check log file for details." but can't find any interesting information on this host. Can anyone help me?
14:16 julim joined #gluster
14:17 plarsen joined #gluster
14:18 nbalachandran joined #gluster
14:19 spandit joined #gluster
14:20 tdasilva joined #gluster
14:20 coredump So, any idea if the "xlator does not implement release_cbk" messages are related to the permission denied errors I get? I get the xlator error at the same time as a "permission denied" error when writing a file.
14:24 calisto joined #gluster
14:27 plarsen joined #gluster
14:27 glusterbot` joined #gluster
14:28 failshell joined #gluster
14:29 Slashman joined #gluster
14:29 harish joined #gluster
14:36 davemc joined #gluster
14:37 soumya joined #gluster
14:43 Fen1 joined #gluster
14:46 atalur joined #gluster
14:55 ctrianta|afk joined #gluster
14:57 harish joined #gluster
15:00 aravindavk joined #gluster
15:04 kovshenin joined #gluster
15:05 ArminderS joined #gluster
15:09 ArminderS left #gluster
15:10 skippy Alphamax: I'm seeing locking problems on 3.6.1 as well.  We have replica 2.
15:13 kovsheni_ joined #gluster
15:14 Alphamax skippy: ok and no clue for the moment ?
15:17 skippy no clues
15:18 skippy Alphamax: http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019621.html
15:20 Alphamax skippy: i saw that thread but there is no news since Nov. 25 :(
15:21 skippy nope :(
15:26 jiku joined #gluster
15:30 bala joined #gluster
15:32 kovshenin joined #gluster
15:34 Bosse joined #gluster
15:40 delhage joined #gluster
15:41 atalur joined #gluster
15:44 _dist joined #gluster
15:46 RameshN joined #gluster
15:49 bala joined #gluster
15:51 bene joined #gluster
16:00 nshaikh joined #gluster
16:01 soumya joined #gluster
16:11 _Bryan_ joined #gluster
16:12 Telsin joined #gluster
16:14 RameshN joined #gluster
16:16 Telsin left #gluster
16:21 XpineX joined #gluster
16:23 bennyturns joined #gluster
16:26 lmickh joined #gluster
16:30 jdarcy joined #gluster
16:31 jdarcy joined #gluster
16:36 DV joined #gluster
16:39 PhoenixSTF joined #gluster
16:40 PhoenixSTF hey guys I have a bit of an issue, I am using replication between 2 nodes over a dedicated Gbit link but writing speed maxes out @35-40 Mb/s
16:41 PhoenixSTF I am using for qcow2 and raw KVM images
16:42 PhoenixSTF is this normal or am I doing something wrong to get these slow speeds?
16:44 prasanth_ joined #gluster
16:59 siel joined #gluster
17:04 JoeJulian PhoenixSTF: Several variables in that. Latency. Are you mounting the volume with FUSE? Are you "nodes" servers? clients? both? Where and how are you measuring that performance?
17:06 bene2 joined #gluster
17:06 elico joined #gluster
17:08 RameshN joined #gluster
17:10 daMaestro joined #gluster
17:11 PhoenixSTF JoeJulian: I tried mounting with NFS and with the KVM/libvirt bundled network storage; dedicated servers with a dedicated GB link; network card performance via iperf: 967 Mbps; mount performance via dd if=/dev/zero of=test bs=1024 count=1048576; and KVM virtual machine installation
17:12 rafi1 joined #gluster
17:17 semiosis PhoenixSTF: try bs=1M
17:18 semiosis @dd
17:18 glusterbot semiosis: If you're testing with dd and seeing slow results, it's probably because you're not filling your tcp packets. Make sure you use a large block size. Further, dd isn't going to tell you how your cluster will perform with simultaneous clients or how a real load will perform. Try testing what you really want to have happen.
17:18 JoeJulian A 1k block size doesn't even fill up a TCP packet.
17:18 semiosis <3 those canned rants
17:18 JoeJulian semiosis++
17:18 glusterbot JoeJulian: semiosis's karma is now 2000006
17:21 semiosis PhoenixSTF: probably want to drop count down to 1000 with bs=1M
17:21 semiosis or even 100
17:22 jdarcy Better yet, use iozone or fio with multiple threads.
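Putting the advice above together, a sketch of a more meaningful test (the mount point is hypothetical; drop oflag=direct if the volume/mount rejects O_DIRECT):

    # large blocks so TCP packets are filled; oflag=direct bypasses the page cache
    dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=1000 oflag=direct

    # multi-threaded sequential write test, as jdarcy suggests
    fio --name=seqwrite --directory=/mnt/gluster --rw=write \
        --bs=1M --size=1G --numjobs=4 --group_reporting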
17:22 PhoenixSTF semiosis: JoeJulian: DARN!!! ok it's for qcow2 and raw images, what is the best setup for this, and sorry and ty
17:22 PhoenixSTF yes you're right, 1M just pushed the bullet
17:25 PhoenixSTF so for performance I have to fill the tcp packets, can I lower the packet size?
17:26 JoeJulian @lucky TCP
17:26 glusterbot JoeJulian: http://en.wikipedia.org/wiki/Transmission_Control_Protocol
17:26 jmarley joined #gluster
17:27 meghanam joined #gluster
17:27 meghanam_ joined #gluster
17:28 JoeJulian There's a lot of good information about TCP and how it works. The smaller your packet, the greater percentage of the TCP packet is header. Each packet has a round trip time, so smaller packets exaggerate that. etc.
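A back-of-the-envelope illustration of why the block size matters (the 0.2 ms round-trip time is an assumed LAN figure, not from this log):

    # each synchronous write costs at least one network round trip
    #   1 KiB writes: 1 KiB / 0.0002 s ≈ 5 MB/s ceiling
    #   1 MiB writes: 1 MiB / 0.0002 s ≈ 5 GB/s ceiling (now disk/NIC-bound)
    # which matches the jump PhoenixSTF saw going from bs=1024 to bs=1M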
17:29 jdarcy Blech.  The rebalance volfile will always be generated with whatever transport happened to be last when creating the regular volfiles.
17:30 PhoenixSTF JoeJulian: ok thanks
17:30 JoeJulian If that's in reference to the 4.0 design, does that matter? You only need to rebalance the subvolume, don't you?
17:30 semiosis jdarcy: welcome back
17:30 jdarcy semiosis: Heh.  I need somewhere to whine.   ;)
17:32 jdarcy JoeJulian: That's in current code, means there's a 50/50 chance of choosing the transport/network that the user would have wanted (e.g. because it's faster).
17:33 JoeJulian Oh, I see.
17:34 JoeJulian PhoenixSTF: btw, you really want to be using ,,(libgfapi) for qemu.
17:34 glusterbot PhoenixSTF: I do not know about 'libgfapi', but I do know about these similar topics: 'qemu-libgfapi'
17:34 JoeJulian @qemu-libgfapi
17:34 glusterbot JoeJulian: http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/
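For reference, a sketch of the libgfapi integration the linked post describes (server, volume, and image names are hypothetical; requires qemu 1.3+ built with GlusterFS support):

    # create an image directly on the volume, no FUSE mount involved
    qemu-img create -f qcow2 gluster://server1/datavol/vm1.qcow2 20G

    # boot a guest from it
    qemu-system-x86_64 -drive file=gluster://server1/datavol/vm1.qcow2,if=virtio,format=qcow2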
17:34 PhoenixSTF JoeJulian: yes I have it already it is mounted and working
17:39 nbalachandran joined #gluster
17:40 nbalacha joined #gluster
17:41 nbalacha joined #gluster
17:44 ildefonso joined #gluster
17:46 rafi1 joined #gluster
17:51 siel joined #gluster
17:51 siel joined #gluster
18:03 elico joined #gluster
18:03 ira joined #gluster
18:08 jobewan joined #gluster
18:13 coredump joined #gluster
18:20 PhoenixSTF left #gluster
18:37 MacWinner joined #gluster
18:40 PeterA joined #gluster
18:51 coredump what are the most obvious things I can tune to make write performance better?
18:51 coredump aside from faster disk/network :P
19:00 John_HPC coredump: there are a lot of variables you can set on your gluster
19:00 John_HPC Take a look at, http://www.slideshare.net/Gluster/gluster-for-geeks-performance-tuning-tips-tricks
19:01 John_HPC http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options
19:04 JoeJulian coredump: "gluster volume set help" will list all the things you can play with. Change one, test, reset that change and repeat. Publish your results. In my very limited tests, I found little to no differences in my use cases changing any of them.
19:05 Telsin joined #gluster
19:06 JoeJulian coredump: deadline scheduler for your disks. Faster ram/cpu for context switches. Lower latency network. Cache writes to fill packet mtu.
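A hedged sketch of the workflow JoeJulian outlines (volume and device names hypothetical):

    gluster volume set help                                    # list tunables with defaults
    gluster volume set myvol performance.cache-size 268435456  # change one option, test...
    gluster volume reset myvol performance.cache-size          # ...then undo before the next
    echo deadline > /sys/block/sdb/queue/scheduler             # per-disk I/O scheduler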
19:10 coredump volume set: failed: '65535' in 'option cache-size 65535' is out of range [524288 - 1073741824]
19:10 coredump erm
19:12 ricky-ticky joined #gluster
19:18 coredump I want less CPU usage, but I guess that's not a thing in a many-small-file write environment
19:20 coredump I also have a question. When I mount a gluster volume, let's say on 4 clients, all of them using the same IP to mount, does that mean that all clients are writing to that server? (my understanding is that they will write to different servers on the distributed volume, but I see higher usage on the server that all clients mount)
19:20 skippy are you using the FUSE glusterfs?
19:20 coredump yes
19:21 coredump but nvm, I found the answer on the docs
19:21 skippy then clients all talk directly to the servers.
19:21 JoeJulian "the" answer?
19:21 JoeJulian Which one was that?
19:21 JoeJulian oh, nevermind.
19:21 skippy the server used on the mount command is just to get the volfile.  thereafter, glusterfs talks directly to servers as needed.
19:21 coredump Note: The server specified in the mount command is only used to fetch the volfile describing the named volume. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount).
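In practice (hostnames hypothetical) that means any server can supply the volfile, and a fallback can be named at mount time:

    # server1 is only contacted for the volfile; I/O then goes to all bricks
    # (spelled backup-volfile-servers, plural and comma-separated, in newer releases)
    mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/gluster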
19:21 JoeJulian I'll go back into my own little world.
19:22 coredump strange that one server sees more load than the others.
19:22 coredump hmm
19:22 JoeJulian It's probably still the first to respond.
19:23 JoeJulian If you want to spread the load, look at cluster.read-hash-mode
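A minimal example of the option JoeJulian mentions (volume name hypothetical; check `gluster volume set help` for the exact value semantics in your release):

    gluster volume set myvol cluster.read-hash-mode 2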
19:23 coredump http://f.cl.ly/items/0T1K3S1V3j2w3r2v3x0W/Screen%20Shot%202014-12-02%20at%202.22.36%20PM.png
19:26 mojibake left #gluster
19:31 XpineX joined #gluster
19:42 siel joined #gluster
19:43 semiosis argh.  i can't seem to attach more than 16 ebs vols to this ec2 instance :(
19:46 tdasilva joined #gluster
19:49 B21956 joined #gluster
19:49 anil joined #gluster
20:00 tom[] i have a question about commands for creating a gluster volume. i put it in this gist: https://gist.github.com/tom--/cbb0a675e28584a07e3b
20:00 glusterbot tom[]: https://gist.github.com/tom's karma is now -4
20:00 * tom[] never understood glusterbot's issue with my use of gists
20:03 drankis joined #gluster
20:03 jmarley joined #gluster
20:04 tessier_ joined #gluster
20:04 tessier_ Wow, a gluster channel. Hello all! :)
20:05 tessier_ Anyone using Gluster with Xen as redundant storage? I've been doing it the hard way for years and I'm wondering if Gluster might be the solution.
20:05 jcsp joined #gluster
20:07 JoeJulian Most use it with qemu, but I know some people use xen.
20:08 JoeJulian s/qemu/qemu-kvm/
20:08 glusterbot What JoeJulian meant to say was: An error has occurred and has been logged. Check the logs for more informations.
20:08 * JoeJulian whacks glusterbot
20:11 tessier_ JoeJulian: I'm having trouble conceptualizing just how gluster works, and the FAQs and getting started guide on gluster.org aren't helping. http://www.gluster.org/documentation/Getting_started_overview/ says no structured data like SQL databases in gluster, but http://blog.gluster.org/2013/11/a-gluster-block-interface-performance-and-configuration/ says you can use it with iscsi, which you could presumably put an SQL database like MySQL in. I'm confused
20:12 JoeJulian I've had great success with innodb backed mariadb.
20:13 Slasheri joined #gluster
20:13 Slasheri joined #gluster
20:13 JoeJulian I use DHT to essentially shard my data by using multiple innodb files that are placed on different servers using dht.
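A hedged sketch of what that sharding setup could look like (paths and options are assumptions; JoeJulian did not give his exact config):

    # one .ibd file per table, so DHT can place tables on different bricks;
    # datadir on a distributed gluster mount
    cat >> /etc/mysql/my.cnf <<'EOF'
    [mysqld]
    datadir               = /mnt/glustervol/mysql
    innodb_file_per_table = 1
    EOF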
20:14 Intensity joined #gluster
20:15 tessier_ JoeJulian: That's good news. I basically want redundant SAN storage behind my Xen VMs. Right now I do it in a complicated way: I create a logical volume on each of my two physical pieces of hardware. Then I export that with tgtd to my Xen server. Then the xen server hands the two block devices from iscsi to the VM which does software RAID inside the VM.
20:16 tessier_ This is a problem in a number of ways. First, it's a lot of steps and complicated. Adding/managing storage is a real pain and error-prone, and a mistake can cause big problems. The md raid inside the VM isn't always reliable and does weird things sometimes.
20:19 tessier_ I can't change the way our app works to use DHT or store things in separate innodb files etc. That's a long story.
20:20 tessier_ Can gluster make this better?
20:23 JoeJulian Sounds like something I would test in the same situation. I specifically chose gluster over the configuration you're describing to meet my own requirements.
20:24 tessier_ I think I'm starting to grok this. How does gluster compare with drbd? It is sounding similar.
20:25 semiosis tom[]: glusterbot is going senile in old age.  to your question though, i'm pretty sure you can't create a volume without any bricks.  furthermore, your add-brick commands would increase distribution as they are.  you'd need to use add-brick replica 2, then add-brick replica 3, after the initial create with one brick
20:25 tessier_ How long has gluster been production-ready? I'm starting to fear I've wasted far too much time on my current setup.
20:25 semiosis tom[]: actually, the add-bricks are not valid, now that i see a bare replica word
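A sketch of the sequence semiosis describes (names hypothetical): create with a single brick, then raise the replica count with each add-brick:

    gluster volume create myvol server1:/bricks/b1
    gluster volume add-brick myvol replica 2 server2:/bricks/b1
    gluster volume add-brick myvol replica 3 server3:/bricks/b1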
20:26 JoeJulian tessier_: probably about 4 years, maybe 5 depending on skill level and use case.
20:26 tessier_ JoeJulian: Oh, ok. Then I don't feel so bad. My current setup began 6 years ago.
20:26 semiosis depends what you have in mind by production
20:26 tessier_ But it's definitely time to move on.
20:27 tessier_ So can I use a file in gluster as a disk image to point my Xen VM at? It sounds like it.
20:27 JoeJulian tom[]: It's my really lazy use of regex in checking for word-- or word++ that causes that.
20:27 glusterbot JoeJulian: word's karma is now 1
20:27 glusterbot JoeJulian: word's karma is now 0
20:27 JoeJulian hehe
20:27 tessier_ I may even be able to avoid iscsi altogether and just go with gluster if that is the case. That would be ideal.
20:28 JoeJulian Yep.
20:28 tom[] damn, glusterbot, you're not contributing productively here!
20:29 tom[] semiosis: thanks. would you take another look at the gist? https://gist.github.com/tom--/cbb0a675e28584a07e3b
20:29 glusterbot tom[]: https://gist.github.com/tom's karma is now -5
20:30 tom[] glusterbot: you know that's not a valid nick, yeah?
20:30 semiosis JoeJulian: word++--
20:30 glusterbot semiosis: word's karma is now 1
20:30 glusterbot semiosis: word++'s karma is now -1
20:31 semiosis s/word++--/An error has occurred and has been logged. Check the logs for more informations./
20:31 glusterbot semiosis: s/word's karma is now 1
20:31 glusterbot semiosis: s/word++'s karma is now -1
20:31 glusterbot semiosis: Error: u'/^(?!s([^A-Za-z0-9\\}\\]\\)\\>\\{\\[\\(\\<\\\\]).+\\1.+[ig]*).*word++--.*/' is not a valid regular expression.
20:31 semiosis rofl
20:31 JoeJulian You can also just make an empty list, append your bricks, then " ".join(foo)
20:31 JoeJulian And you're just evil, semiosis
20:32 semiosis tom[]: your create command is still invalid (no bricks) as i mentioned earlier
20:32 tom[] JoeJulian: i want to get this out of a j2 template and into a task loop
20:33 JoeJulian Ah
20:34 JoeJulian Why?
20:34 delhage joined #gluster
20:35 tom[] otherwise i have to write the template to a file and then run it through a shell command and then i get nausea
20:35 tom[] but maybe it's for the best
20:37 JoeJulian gluster --mode=script < /etc/glusterfs/templated_volume.txt
20:37 JoeJulian That way you're using your template for creating a file, as is natural, then parsing that file with the gluster command, which is sane.
20:38 tom[] will gluster accept a file name instead of stdin?
20:39 JoeJulian no
20:39 JoeJulian Should, but no.
20:39 tom[] ok
20:39 tessier_ http://community.gluster.org/q/what-s-a-recommended-raid-level-to-use-underneath-glusterfs/ is a 404. :( I'm surprised any RAID is needed at all in a gluster server. Is the data not replicated between servers?
20:40 JoeJulian tessier_: Yes, in a replicated volume you would configure replication between servers. Some people choose to use raid for additional fault tolerance, some don't.
20:40 tom[] can that batch file that i pipe in have several commands?
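A minimal sketch of the batch approach JoeJulian describes (file contents hypothetical); in script mode the CLI reads one command per line from stdin, so several commands per file work:

    $ cat /etc/glusterfs/templated_volume.txt
    volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1
    volume start myvol
    $ gluster --mode=script < /etc/glusterfs/templated_volume.txt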
20:41 gildub joined #gluster
20:43 tessier_ JoeJulian: Ah, ok. Because if I don't have to add any more RAID then we will be ok for capacity if I migrate my current setup.
20:45 tessier_ Does gluster basically do block-level replication?
20:45 Bosse joined #gluster
20:46 tessier_ http://joejulian.name/blog/glusterfs-replication-dos-and-donts/ Hey, this looks handy. :)
20:46 semiosis tessier_: file level replication
20:46 semiosis and file distribution
20:51 MugginsM joined #gluster
20:53 JoeJulian @which brick
20:53 glusterbot JoeJulian: To determine on which brick(s) a file resides, run getfattr -n trusted.glusterfs.pathinfo $file through the client mount.
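For example, against a hypothetical FUSE mount:

    # prints the brick path(s) holding the file, e.g. each replica
    getfattr -n trusted.glusterfs.pathinfo /mnt/gluster/images/vm1.qcow2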
20:54 coredump So, tons of small files in many deep-level directories proved to be extremely slow for gluster (at least when I try to do a find -ctime). I guess it's because of the attr reading. Can anyone relate?
20:55 JoeJulian @meh
20:55 glusterbot JoeJulian: I'm not happy about it either
20:55 tessier_ semiosis: Right but I want to use gluster to backend my Xen VMs. Each Xen VM would be a file. That's what I'm trying to figure out: How well would that work?
20:55 elyograg coredump: yes, our gluster install is very slow at gathering statistical information.
20:55 tessier_ If I have a 500G VM is it going to be copying the whole 500G file around every time I make a change?
20:56 JoeJulian tessier_: That's the way most people do it. And no, the posix standard does not require copying a whole file every time you change something.
20:56 coredump elyograg: found anything that help?
20:57 tessier_ JoeJulian: Right but for replication purposes...if I change one byte in the middle of a 500G file how does gluster replicate that change to another node?
20:57 elyograg coredump: avoiding those operations. :)
20:57 coredump lol
20:57 JoeJulian Same as a disk. It reads the block, changes the byte, and writes the block back out.
20:58 tessier_ Ok, so it can do block level changes. Cool.
20:58 T0aD joined #gluster
20:58 tessier_ As opposed to something like MogileFS for example which is more like an object store and only deals in whole files.
20:58 JoeJulian Right.
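To make the read-modify-write point concrete, a sketch (paths hypothetical): an in-place write touches only the affected region, and replication ships just that write to each replica rather than recopying the file:

    # overwrite one byte at offset 500000000 without truncating the 500G image
    printf 'X' | dd of=/mnt/gluster/vm.img bs=1 seek=500000000 conv=notrunc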
21:03 siel joined #gluster
21:35 B21956 joined #gluster
21:35 B21956 left #gluster
21:52 harish joined #gluster
21:53 daMaestro joined #gluster
21:59 plarsen joined #gluster
22:00 andreask joined #gluster
22:11 stomith left #gluster
22:12 deniszh joined #gluster
22:26 badone joined #gluster
22:26 tessier_ I've got a mix of centos 5 and 6 machines. Upgrading the 5 to 6 (or 7) is definitely in the works but is gluster known to work ok on 5? I know it's quite old by now.
22:30 zerick joined #gluster
22:33 JoeJulian Yep, still works.
22:34 tessier_ That will make this a lot easier. Now to figure out how to test it and then migrate.
22:41 tessier_ http://www.gluster.org/documentation/use_cases/Virt-store-usecase/ Wow, this is exactly what I want to do.
22:42 plarsen joined #gluster
22:45 misko_ Hello, I asked a couple of days ago but either no one replied or the reply was lost in my join/leave buffer. I'm trying to turn the file snapshot feature on, but my distribution is missing /usr/lib/x86_64-linux-gnu/glusterfs/3.6.3/xlator/features/qemu-block.so
22:45 misko_ I use debian wheezy + gluster packages from gluster.org.
22:45 JoeJulian semiosis: ^
22:48 semiosis ok
22:48 semiosis misko_: i'll look into enabling that
22:48 misko_ thank you
22:48 semiosis yw
22:58 wgao joined #gluster
22:58 bene joined #gluster
22:59 siel joined #gluster
23:01 skippy_ joined #gluster
23:01 skippy_ joined #gluster
23:03 n-st joined #gluster
23:06 social joined #gluster
23:07 jobewan joined #gluster
23:12 tdasilva joined #gluster
23:15 daMaestro joined #gluster
23:53 tessier_ Is it possible to increase the replication level at a later time?
23:53 semiosis since version 3.3.0 yes it is
23:53 tessier_ I want to do a quick test with just one brick but if it works well I will want to add another brick on another server and set the replication level to 2.
23:53 tessier_ Great, thanks!
23:53 semiosis add-brick replica N <new-brick> ...
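Concretely, for tessier_'s one-brick test (names hypothetical), going from one brick with no replication to replica 2:

    gluster volume add-brick testvol replica 2 server2:/bricks/b1
    gluster volume heal testvol full    # populate the new replica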
23:57 XpineX joined #gluster
23:59 nishanth joined #gluster
