IRC log for #gluster, 2015-03-11


All times shown according to UTC.

Time Nick Message
00:00 gildub joined #gluster
00:17 davidbitton joined #gluster
00:51 elico left #gluster
00:51 topshare joined #gluster
01:01 soumya joined #gluster
01:11 tg2 http://notepad.cc/hokicli13
01:11 tg2 there it is semi-documented
01:11 tg2 if anybody has access to RHEL/RHS to test if they see the same speeds that would be cool
01:13 calisto_ joined #gluster
01:16 soumya joined #gluster
01:18 calisto__ joined #gluster
01:22 calisto_ joined #gluster
01:27 topshare joined #gluster
01:33 topshare joined #gluster
01:42 hagarth joined #gluster
01:51 harish_ joined #gluster
01:52 harish_ joined #gluster
02:29 haomaiwa_ joined #gluster
02:29 haomaiwa_ joined #gluster
02:30 haomaiwang joined #gluster
02:31 haomaiwa_ joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:56 davidbitton joined #gluster
02:59 nangthang joined #gluster
02:59 tg2 just tested on RHEL 7.1 same results
03:00 tg2 just traced the running fsd process, 76% of cpu time sitting in futex vs 12% of time in pwrite and writev
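
    A sketch of how a syscall-time summary like that can be gathered, assuming glusterfsd is the brick process of interest (the pgrep lookup is illustrative):

        # attach to the running brick daemon and tally time spent per syscall
        strace -c -f -p "$(pgrep -o glusterfsd)"
        # interrupt with Ctrl-C to print the summary table (futex, pwrite, writev, ...)
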
03:17 bharata-rao joined #gluster
03:40 lalatenduM__ joined #gluster
03:47 ppai joined #gluster
03:47 dusmant joined #gluster
03:50 itisravi joined #gluster
04:03 kumar joined #gluster
04:04 poornimag joined #gluster
04:05 hagarth joined #gluster
04:10 bala joined #gluster
04:10 atinmu joined #gluster
04:10 gnudna joined #gluster
04:11 rafi joined #gluster
04:11 gnudna hi guys anybody able to help with a split-brain issue
04:11 gnudna 2 server with replicated brick
04:12 gnudna aka equivalent to a raid1 ;)
04:13 lalatenduM___ joined #gluster
04:13 lalatenduM__ joined #gluster
04:14 gnudna i got a vm_image that shows up as split-brain, how can i find its gfid
04:14 gnudna so i can remove references from .gluster folder?
04:15 tg2 semiosis - if I install glusterfs-dbg how can I get it to run with debug symbols on ?
04:15 tg2 I still get "BuildID[sha1]=feca7e51ba1985ce4901c36db9768bf3528ea5fb, stripped" after installing glusterfs-dbg
04:17 ndevos tg2: you might need -dbg packages for all the libraries that the glusterfs packages link against?
04:17 * ndevos doesnt know how debian/ubuntu exactly does it though
04:17 tg2 i think they are all in 1 package
04:17 tg2 ah I think I see here
04:17 kanagaraj joined #gluster
04:18 tg2 -> /usr/lib/debug/.build-id/
04:18 ndevos tg2: I mean, libraries like glibs and openssl and whatnot
04:18 ndevos s/glibs/glibc/
04:18 glusterbot What ndevos meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
04:18 ndevos :P glusterbot
04:18 tg2 lol
04:18 tg2 yeah ndevos there is a package in the repo which is the dbug symbols
04:19 tg2 but I can't seem to instrument with them
04:19 ndevos ~xattrs | gnudna
04:19 glusterbot ndevos: Error: No factoid matches that key.
04:19 ndevos ~gfid | gnudna
04:19 glusterbot gnudna: The gfid is a uuid that's assigned to represent a unique inode that can be identical across replicas. It's stored in extended attributes and used in the .glusterfs tree. See http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/ and http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
04:19 ndevos ~getfattr | gnudna
04:19 glusterbot ndevos: Error: No factoid matches that key.
04:20 ndevos hmm, well, something like: getfattr -m. -ehex -d /path/on/brick/to/the/file.qcow2
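
    For context, a sketch of mapping a brick file to its gfid and its .glusterfs entry, using the placeholder path above:

        # dump the trusted.* xattrs of the file directly on the brick, hex-encoded
        getfattr -m . -e hex -d /path/on/brick/to/the/file.qcow2
        # trusted.gfid=0x<32 hex digits>; the first two byte pairs name the
        # subdirectories of its hard link under the brick's .glusterfs tree:
        #   <brick>/.glusterfs/<aa>/<bb>/<the gfid in aabbcccc-...-uuid form>
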
04:20 tg2 ?
04:21 lalatenduM___ joined #gluster
04:21 gnudna perfect
04:21 tg2 anyway the glusterfs-dbg package has debug symbols for many build IDs but not the glusterfsd that is in the same repo :|
04:21 gnudna exactly what i was looking for
04:21 gnudna i think anyways ;)
04:22 ndevos tg2: ah, okay, I would not know about that - just make sure the versions match?
04:22 tg2 dpkg-query -L glusterfs-dbg | grep "feca7e51ba"
04:22 tg2 nadda
04:22 tg2 they do
04:23 ndevos I don't know, I only use rpms and they work fine...
04:23 tg2 Package: glusterfs-dbg, Version: 3.6.2-ubuntu1~trusty3, Maintainer: semiosis
04:23 tg2 Package: glusterfs-server, Version: 3.6.2-ubuntu1~trusty3, Maintainer: semiosis
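
    One way to check whether a -dbg package actually covers a given binary is to compare build IDs; a sketch following Debian's detached-debug convention (paths illustrative):

        # print the build ID embedded in the installed binary
        readelf -n /usr/sbin/glusterfsd | grep 'Build ID'
        # matching detached symbols, if packaged, would be installed at
        #   /usr/lib/debug/.build-id/<first 2 hex chars>/<remaining chars>.debug
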
04:24 ndevos yeah, https://launchpad.net/~gluster should be the place to report bugs with those packages, I think
04:24 tg2 i can just compile from source, but figured package maint might want to have a look and see if dbg was up to date with what was in the ppa
04:25 ndevos indeed, and it should get fixed, but that gluster ppa is the current source for ubuntu packages, not semiosis's personal one
04:26 ndevos semiosis is still one of the maintainers, but he's sharing the work :)
04:26 tg2 his is in the docs
04:26 tg2 http://www.gluster.org/community/documentation/index.php/Getting_started_install
04:26 tg2 > Then add the community GlusterFS PPA: sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.5
04:26 tg2 etc
04:26 tg2 also it is the most up to date
04:27 ndevos well, then the docs are wrong, he updated the README on download.gluster.org ...
04:27 ndevos how strange?!
04:27 tg2 gluster docs
04:27 tg2 only like 80 copies
04:27 tg2 for each minor
04:28 ndevos like http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/wheezy/README
04:28 spandit joined #gluster
04:28 ndevos wait, thats not even a ppa
04:29 tg2 I see here
04:29 tg2 http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.2/Debian/wheezy/README
04:29 * ndevos doesnt know and gives up on the ubuntu topic
04:29 tg2 i guess the short explanation is that ppa is to ubuntu what pip is to python
04:30 ndevos oh, yes, I get the ppa bit, I just dont know the url that should contain the packages
04:30 tg2 also, I used the official gluster ppa when I installed
04:30 tg2 https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.6
04:30 ndevos I *thought* the gluster launchpad one was the latest and greatest
04:31 tg2 yeah that is what I am using
04:31 tg2 gluster/glusterfs-3.6
04:31 ndevos right, but that is not ppa:semiosis/ubuntu-glusterfs-3.5
04:31 tg2 no but he is the maintainer
04:31 tg2 for the current one
04:32 tg2 in gluster official and his own (I don't think he keeps his current any more, it is in the main gluster)
04:32 ndevos ah, bugs for the debian/ubuntu packages go here: https://github.com/semiosis/glusterfs-debian/issues/
04:32 tg2 his used to have the latest but I think you guys changed that into the official gluster to avoid exactly this ;D
04:32 tg2 he is still listed as the maintainer tho
04:32 gem joined #gluster
04:33 ndevos yes, semiosis is one of the maintainers for those packages, and I think they use the gluster launchpad team to provide them
04:33 T3 joined #gluster
04:33 anoopcs joined #gluster
04:33 jiffin joined #gluster
04:35 tg2 https://github.com/semiosis/glusterfs-debian/issues/9
04:35 tg2 ;)
04:35 tg2 thanks I didn't know he was tracking on github would have been easier :D
04:36 jiffin joined #gluster
04:36 tg2 i just want to make some flamegraphs to annoy purpleidea
04:36 tg2 all this trouble!
04:36 tg2 ;D
04:36 nishanth joined #gluster
04:39 victori joined #gluster
04:42 nbalacha joined #gluster
04:45 anoopcs joined #gluster
04:47 RameshN joined #gluster
04:50 gnudna left #gluster
04:55 SOLDIERz joined #gluster
04:56 kshlm joined #gluster
04:57 lalatenduM joined #gluster
05:02 T3 joined #gluster
05:03 schandra joined #gluster
05:04 victori joined #gluster
05:11 ndarshan joined #gluster
05:13 meghanam joined #gluster
05:15 atalur joined #gluster
05:16 anil joined #gluster
05:18 Apeksha joined #gluster
05:18 deepakcs joined #gluster
05:19 nishanth joined #gluster
05:19 Apeksha_ joined #gluster
05:24 Manikandan joined #gluster
05:24 Manikandan_ joined #gluster
05:24 ashiq joined #gluster
05:35 kdhananjay joined #gluster
05:40 Bhaskarakiran joined #gluster
05:42 ppp joined #gluster
05:44 ramteid joined #gluster
05:46 Manikandan_ joined #gluster
06:02 soumya joined #gluster
06:04 T3 joined #gluster
06:07 meghanam joined #gluster
06:08 vimal joined #gluster
06:14 poornimag joined #gluster
06:14 Bhaskarakiran joined #gluster
06:22 dusmant joined #gluster
06:29 meghanam joined #gluster
06:35 SOLDIERz joined #gluster
06:40 lyang0 joined #gluster
06:41 schandra joined #gluster
06:45 bala joined #gluster
06:45 atalur joined #gluster
06:48 kiwnix joined #gluster
06:48 kiwnix joined #gluster
06:51 atinmu joined #gluster
07:01 lifeofguenter joined #gluster
07:05 T3 joined #gluster
07:06 kovshenin joined #gluster
07:16 kumar joined #gluster
07:18 Philambdo joined #gluster
07:22 jtux joined #gluster
07:26 joshin left #gluster
07:30 liquidat joined #gluster
07:30 LebedevRI joined #gluster
07:51 atinmu joined #gluster
07:53 kovshenin joined #gluster
07:54 schandra joined #gluster
07:58 dusmant joined #gluster
08:07 [Enrico] joined #gluster
08:11 poornimag joined #gluster
08:11 soumya joined #gluster
08:14 aravindavk joined #gluster
08:18 topshare joined #gluster
08:23 topshare joined #gluster
08:30 Norky joined #gluster
08:31 anrao joined #gluster
08:36 raghu joined #gluster
08:38 hgowtham joined #gluster
08:45 harish joined #gluster
08:46 dusmant joined #gluster
08:49 anoopcs joined #gluster
08:53 ctria joined #gluster
08:54 T3 joined #gluster
08:55 topshare joined #gluster
08:57 Slashman joined #gluster
08:58 anti[Enrico] joined #gluster
09:00 atalur joined #gluster
09:01 topshare joined #gluster
09:01 schandra joined #gluster
09:05 topshare joined #gluster
09:07 nangthang joined #gluster
09:12 Dw_Sn joined #gluster
09:12 T0aD joined #gluster
09:13 itisravi_ joined #gluster
09:17 meghanam joined #gluster
09:18 deniszh joined #gluster
09:24 dusmant joined #gluster
09:27 doekia joined #gluster
09:33 itisravi_ joined #gluster
09:41 jflf joined #gluster
09:50 dusmant joined #gluster
09:56 nshaikh joined #gluster
09:58 [Enrico] joined #gluster
10:24 dusmant joined #gluster
10:31 Dw_Sn joined #gluster
10:32 ashiq joined #gluster
10:32 lifeofguenter joined #gluster
10:42 T3 joined #gluster
10:59 lifeofguenter joined #gluster
10:59 AdrianH joined #gluster
10:59 Dw_Sn joined #gluster
10:59 ricky-ticky joined #gluster
11:01 Prilly joined #gluster
11:03 AdrianH Hi, I am getting "mismatching layouts" and "disk layout missing" in Gluster's logs. I guess to fix this I need to run "gluster volume rebalance VOLNAME fix-layout start". Is it OK to run this command and read/write to Gluster at the same time ?
11:04 Prilly joined #gluster
11:06 RayTrace_ joined #gluster
11:08 AdrianH Anyone knows this? I can't find this information in the documentation....
11:08 ctria joined #gluster
10:10 Prilly Intel has a new SoC
11:11 Prilly anyone considered those SoCs for Gluster?
11:11 misc which soc ?
11:14 mikedep333 joined #gluster
11:22 Prilly I am referring to the new Xeon D platform with 2 new cpus, the D-1520 and D-1540
11:24 o5k joined #gluster
11:24 Prilly Actually I am considering the D-1520 SoC, this will probably be good enough
11:26 Prilly http://www.storagereview.com/supermicro_launches_new_line_of_low_power_high_density_server_solutions
11:31 Prilly the specs of that SoC are incredible, it also has some nice nics, 2x10gbit
11:33 jflf AdrianH: did one of your servers go offline, or did you suffer a split brain?
11:34 Prilly supermicro has 2 new mini-itx boards, the Supermicro X10SDV-TLN4F and X10SDV-F
11:34 Prilly http://www.servethehome.com/intel-xeon-d-1500-platforms-supermicro-d-1540/
11:35 side_control joined #gluster
11:36 Prilly those x10sdv cards must be ideal for a gluster node?
11:43 tuxcrafter joined #gluster
11:48 dusmant joined #gluster
11:52 atinmu joined #gluster
11:54 yossarianuk hi I have 2 questions ....
11:54 AdrianH jflf: I resized the LVM under the bricks but couldn't do them all at the same time (long story) so for a couple days I had 2x4TB bricks and 2x2TB bricks
11:55 AdrianH everything seems to be working fine, it's just those messages in the logs...
11:55 lifeofguenter joined #gluster
11:55 yossarianuk 1. Is there a way within glusterfs to only replicate at certain times ? (or should I just use cron to start/stop the glusterd service?)
11:56 yossarianuk 2. I am about to setup a volume between 2 servers - I want to create a volume using a folder that has files in already - the same files/folders exist on both servers - will that work ?
11:57 itisravi joined #gluster
11:58 AdrianH (I am not sure) but for 2. I want to say no. why not just copy the files to Gluster once you have finished setting it up?
11:58 soumya joined #gluster
11:58 T3 joined #gluster
11:59 JustinClift *** REMINDER: Weekly Gluster Community meeting starts in #gluster-dev in 1 minute ***
11:59 nbalacha joined #gluster
12:01 yossarianuk AdrianH: cheers - its to try to avoid having to copy 150GB if I do not need to...
12:02 kovshenin joined #gluster
12:03 AdrianH 150GB is nothing
12:03 AdrianH 150TB i can understand
12:05 yossarianuk also - (say i'm creating a volume from a blank folder) - if I use (for example) the folder /exports/gluster (on both sides) to create the volume,  once it is running if I added a file to /exports/gluster it should replicate ?
12:05 itisravi_ joined #gluster
12:06 ctria joined #gluster
12:06 yossarianuk (also i'm assuming the folder (lost+found) won't upset anything ?
12:14 AdrianH yossarianuk: I am not sure, this is what I know, I created my setup using bricks (4*2 bricks), I never write directly to the bricks, I mount the gluster volume on the server that needs it and write to that mount point. Gluster will then distribute and replicate (that's my setup) to the other bricks
12:15 yossarianuk AdrianH: thank you - i'll give it a go (and see what occurs)
12:16 AdrianH yossarianuk: you're welcome
12:18 poornimag joined #gluster
12:20 o5k_ joined #gluster
12:25 jflf Sorry for the delay -- yes normally Gluster will replicate automatically between different bricks
12:26 jflf So AdrianH in your case I guess that what you would need is a heal cycle
12:27 yossarianuk AdrianH: + jflf:  so just to confirm  - once I have made a volume (i.e folder - /exports/gluster) - if I add a file to /exports/gluster it will replicate?  Or do I need to mount the glusterfs share (on either machine ?)  - sorry if this is an idiotic question..
12:27 Pupeno joined #gluster
12:27 Pupeno joined #gluster
12:27 AdrianH "jflf": ok, thanks I'll look into that
12:27 jflf AdrianH: also, have a look at JoeJulian 's http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
12:28 dusmant joined #gluster
12:29 jflf yossarianuk: sorry, I skimmed over your original message -- I don't know if it will replicate files that are already in one of the bricks but not the other.
12:29 jflf AdrianH: any option to backup your data before starting a healing cycle?
12:32 kshlm joined #gluster
12:32 AdrianH jflf: I didn't think it was that bad, yes I have a backup (don't want to use it if I don't). I'm not sure I understand all this, do those 2 message mean I have a "split brain"?
12:32 yossarianuk jflf: cheers - just to confirm the files are identical in both servers (i have md5sum checked also..) if that helps
12:32 xoritor pjschmitt, you ask about my licenses right?
12:34 jflf AdrianH: well that's what I suspect given the original messages -- that the metadata was updated on some of the bricks and not the others.
12:35 jflf You can try with a fix-layout first, if the data is the same it shouldn't touch it.
12:36 jflf Disclaimer: I have tried a bunch of failure cases when testing Gluster in the first place, but I haven't seen that specific issue so I haven't figured out how to recover from it.
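
    For reference, the commands under discussion; fix-layout only rewrites directory layout xattrs, so it is generally regarded as safe to run while clients keep reading and writing (VOLNAME is a placeholder):

        gluster volume rebalance VOLNAME fix-layout start
        gluster volume rebalance VOLNAME status
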
12:36 AdrianH jflf: I've done a gluster volume heal VOL_NAME info, and there is no files that need healing
12:37 jflf Alright
12:37 jflf So really the data is the same everywhere.
12:38 firemanxbr joined #gluster
12:41 T3 joined #gluster
12:43 hchiramm_ joined #gluster
12:46 ppp joined #gluster
12:54 AdrianH jflf: now I have a bunch of "metadata self heal  is successfully completed" in the logs. Everything seems OK, just those message earlier. I won't do anything for the moment and keep an eye on my Gluster logs. Thanks for your help
12:55 jflf Excellent, I hope that those issues are gone
12:55 jflf That's how I fixed my system when I unplugged one of the bricks on the fly to see what would happen. :)
12:57 AdrianH All I did was grow the LVM under my bricks but had to stop half way because couldn't get more disks. A couple days later I added the missing disk so all my bricks were the same size. Nothing stopped working, I only saw those messages because I was checking the logs.
13:01 Slashman_ joined #gluster
13:01 hagarth joined #gluster
13:03 kanagaraj joined #gluster
13:04 [Enrico] joined #gluster
13:05 dusmant joined #gluster
13:08 pkoro joined #gluster
13:12 nishanth joined #gluster
13:14 wkf joined #gluster
13:19 harish joined #gluster
13:22 luis_silva joined #gluster
13:25 T3 joined #gluster
13:26 B21956 joined #gluster
13:30 anoopcs joined #gluster
13:32 georgeh-LT2 joined #gluster
13:44 jmarley joined #gluster
13:51 dgandhi joined #gluster
13:51 kshlm joined #gluster
13:52 dgandhi joined #gluster
13:53 dgandhi joined #gluster
13:55 dgandhi joined #gluster
14:00 topshare joined #gluster
14:04 plarsen joined #gluster
14:06 topshare joined #gluster
14:07 topshare joined #gluster
14:23 bennyturns joined #gluster
14:25 topshare joined #gluster
14:27 theron joined #gluster
14:32 lalatenduM joined #gluster
14:33 plarsen joined #gluster
14:36 side_control joined #gluster
14:41 SOLDIERz_ joined #gluster
14:42 o5k joined #gluster
14:48 Apeksha joined #gluster
14:49 lifeofguenter joined #gluster
14:54 anoopcs joined #gluster
14:55 bala joined #gluster
14:57 SOLDIERz__ joined #gluster
15:02 yossarianuk hi - sooo I added a new lvm partition to use as a glusterfs volume - I used the path - /exports/gluster/test1/ (both servers) - created the volume and started the vol - if I add a file to /exports/gluster/test1/ on either server nothing is replicated
15:03 yossarianuk is it because I also have to mount the glusterfs volume on one/both of the server?
15:03 aravindavk joined #gluster
15:07 Folken_ how did you mount the volume
15:07 Folken_ you don't write directly to the gluster brick path, you need to write to the gluster mount point
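
    In other words, a minimal sketch (server, volume, and paths are placeholders):

        # mount the volume through the gluster client...
        mount -t glusterfs server1:/myvol /mnt/gluster
        # ...and write through the mount point, never into the brick directory
        cp somefile /mnt/gluster/
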
15:08 yossarianuk Folken_: thank you for the confirmation !
15:08 yossarianuk will try that !
15:08 side_control joined #gluster
15:10 n-st joined #gluster
15:14 lifeofguenter joined #gluster
15:16 elico joined #gluster
15:18 glusterbot News from newglusterbugs: [Bug 1200879] initialize child_down_cond conditional variable. <https://bugzilla.redhat.com/show_bug.cgi?id=1200879>
15:19 bchilds so i have a 2 node 6 brick setup with stripe 3 replica 2.. i created a 3GB file and was expecting to see it pieced across the bricks.. my trusted.pathinfo looks like: trusted.glusterfs.pathinfo="(<DISTRIBUTE:StripeVolume-dht> (<STRIPE:StripeVolume-stripe-0:[131072]> (<REPLICATE:StripeVolume-replicate-0> <POSIX(/mnt/brick1/stripe/2):vm-1:/mnt/brick1/stripe/2/words8.txt> <POSIX(/mnt/brick1/stripe/1):vm-1:/mnt/brick1/stripe/1/words8.txt>
15:19 bchilds why isn't it breaking the file across the bricks?
15:24 JoeJulian Why do you think it's not?
15:25 JoeJulian Oh, nevermind. I haven't had my coffee yet.
15:25 gem joined #gluster
15:25 JoeJulian bchilds: is it a sparse file?
15:27 bchilds i'm not sure
15:27 bchilds i just copied the dictionary file like 10000x
15:27 bchilds how do i check or make it sparse?
15:28 JoeJulian If it's not big chunks of nulls then it's not sparse.
15:28 bchilds its 3GB of words
15:28 bchilds here's the volume info
15:29 JoeJulian I wonder if pathinfo is broken for stripe.
15:29 bchilds http://pastebin.com/raw.php?i=CcYe3kud
15:29 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:29 bchilds its the StripeVol in that setup
15:30 bchilds i'm wanting to update the glusterfs-hadoop plugin to support striped volumes with blocked data
15:30 wushudoin joined #gluster
15:31 JoeJulian I'm duplicating your test.
15:31 crashmag joined #gluster
15:32 jbrooks joined #gluster
15:35 JoeJulian bchilds: yep, that's a bug. Do you want to file a bug report, or should I?
15:35 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
15:36 bchilds your bug might be better than mine.. what is the bug exactly?  trusted.pathinfo is wrong for striped volumes?
15:36 JoeJulian And the bug is with pathinfo. The file does get striped.
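
    For reference, the virtual xattr in question can be read from a client mount (the file path is a placeholder):

        getfattr -n trusted.glusterfs.pathinfo /mnt/gluster/words8.txt
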
15:40 coredump joined #gluster
15:41 deepakcs joined #gluster
15:42 lifeofguenter joined #gluster
15:47 pelox joined #gluster
15:48 lifeofguenter joined #gluster
15:52 JoeJulian bchilds: Is your hadoop feature that depends on this being tracked somewhere?
15:53 bchilds well this came up as part of the work for erasure encoding
15:53 bchilds which i haven't tested yet
15:54 JoeJulian np, I was just going to link it if it was relevant.
15:54 bchilds i do have a RFE for that somewhere.. stripe support is a known latent bug
15:56 Sjors Hi all
15:56 Sjors Does anyone know if there's a way to see historic changes to a GlusterFS volume?
15:56 Sjors I.e. who, from what client, deleted a file
15:59 JoeJulian Other than looking through bash history, no.
16:00 JoeJulian I wonder if that would show up in selinux audit logs...
16:06 Prilly joined #gluster
16:09 stickyboy joined #gluster
16:11 hchiramm_ joined #gluster
16:12 bchilds joejulian : did you get that bug open?  i need to update sayan
16:12 JoeJulian cc'd you on it.
16:12 JoeJulian bug 1200914
16:12 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1200914 unspecified, unspecified, ---, bugs, NEW , pathinfo is wrong for striped replicated volumes
16:12 Pupeno_ joined #gluster
16:14 bchilds thank you sir
16:15 JoeJulian bchilds: You could probably work around it, but it complicates the process a bit. The only bit that's not deterministic is the dht subvolume.
16:15 siel joined #gluster
16:15 raz joined #gluster
16:15 raz joined #gluster
16:15 sadbox joined #gluster
16:15 masterzen joined #gluster
16:16 JoeJulian As long as you have the volume info, you can use pathinfo to get the dht subvolume (${volname}-stripe-${subvol}) and the rest is based on the volume definition.
16:17 n-st joined #gluster
16:18 glusterbot News from newglusterbugs: [Bug 1200914] pathinfo is wrong for striped replicated volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1200914>
16:25 gem joined #gluster
16:28 virusuy joined #gluster
16:28 virusuy joined #gluster
16:28 dusmant joined #gluster
16:30 dusmant joined #gluster
16:33 bchilds shouldn't the start:stop offsets be in the pathinfo too, for each data block?
16:33 JoeJulian bchilds: that's also easily calculated. Stripes always start on the first stripe subvolume and, of course, rotate on cluster.stripe-block-size.
16:34 JoeJulian I don't know the data structure enough to know if that's something that's easily added.
16:34 bchilds how do you find cluster.stripe-block-size ?
16:34 bchilds is that another xattr on each file?
16:34 bchilds or is that something to parse out of gluster volume info
16:34 JoeJulian It's in volume info, and it defaults to 128k
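
    Based on that description, a hypothetical back-of-envelope for which stripe subvolume holds a given byte offset (stripe count 3 and the default 128k block size, as in the volume above):

        offset=1048576
        # subvolume index = (offset / block-size) mod stripe-count
        echo $(( (offset / 131072) % 3 ))   # -> 2, i.e. the third stripe subvolume
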
16:35 JoeJulian I assume you know you can get volume-info in xml form.
16:35 bchilds NO I DIDNT
16:35 bchilds how!??!?!
16:35 bchilds that would be so much easier than my regex scrape
16:36 bchilds looks like -xml
16:36 bchilds wow that saves me a lot
16:36 bchilds thanks!!!!!
16:36 JoeJulian you're welcome.
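
    A sketch of the XML form (the --xml flag applies to most gluster CLI commands; exact element names may vary by version):

        gluster volume info StripeVolume --xml
        # pretty-print the machine-readable output for inspection, e.g.:
        gluster volume info StripeVolume --xml | xmllint --format -
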
16:40 lifeofguenter joined #gluster
16:43 hagarth joined #gluster
16:47 T3 joined #gluster
16:59 theron_ joined #gluster
17:02 daMaestro joined #gluster
17:04 jmarley joined #gluster
17:06 lifeofguenter joined #gluster
17:14 victori joined #gluster
17:19 deniszh joined #gluster
17:25 Rapture joined #gluster
17:28 sputnik13 joined #gluster
17:35 elico joined #gluster
17:46 lifeofguenter joined #gluster
17:47 jiffin joined #gluster
18:01 firemanxbr joined #gluster
18:06 afics joined #gluster
18:14 kbyrne joined #gluster
18:32 corretico joined #gluster
18:44 lalatenduM joined #gluster
18:45 hamiller joined #gluster
19:21 rotbeard joined #gluster
19:23 diegows joined #gluster
19:30 ekuric joined #gluster
19:40 deniszh joined #gluster
19:57 calisto joined #gluster
20:06 hchiramm joined #gluster
20:07 rp__ joined #gluster
20:24 theron joined #gluster
20:27 DV joined #gluster
20:38 hagarth joined #gluster
20:56 TeddyWong joined #gluster
20:57 TeddyWong Hello all, has anyone managed to implement glusterfs with geo-replication and master-master?
21:21 T0aD joined #gluster
21:29 dgandhi joined #gluster
21:40 lifeofguenter joined #gluster
21:47 jmarley joined #gluster
21:53 ghenry joined #gluster
21:55 wkf joined #gluster
22:19 T3 joined #gluster
22:45 T3 joined #gluster
22:52 gildub joined #gluster
23:23 bala joined #gluster
23:24 JPaul joined #gluster
23:24 JPaul has anyone had issues with glusterfs-client not being able to mount during boot on Debian Wheezy?
23:25 JPaul I've discovered an issue on a server cluster I am trying to setup where the mounting fails because it says the transport endpoint is not connected, but once booting has completed I can manually mount just fine
23:26 JoeJulian I've heard of some people having problems. Seems to be due to the volume not actually being started before it tries to mount.
23:26 JPaul on the client or the server
23:26 JoeJulian server
23:27 JoeJulian But that's only been people that have been mounting from localhost on servers.
23:30 JPaul hum, yeah, they are definitely started, and I'm having this happen on all 5 of my servers, two of which are the actual gluster cluster
23:30 JPaul it's like the mounting is attempted too soon and networking hasn't finished setting up routes yet
23:31 JPaul i added some sleep timers in the mounting scripts, and that helps when i'm just going over eth0, but if I switch to bonding it fails no matter what
23:31 JPaul thanks for talking with me, i've got to run and meet a guy about some foam panels
23:31 JoeJulian In EL based distros, we set a _netdev mount option and it's handled by init or systemd.
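
    The corresponding /etc/fstab entry would look roughly like this (server, volume, and mount point are placeholders):

        server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev  0 0
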
23:38 plarsen joined #gluster
23:41 wkf joined #gluster
23:46 JPaul joined #gluster
