IRC log for #gluster, 2014-04-28


All times shown according to UTC.

Time Nick Message
00:06 yinyin_ joined #gluster
00:12 diegows joined #gluster
00:17 sputnik1_ joined #gluster
00:34 MrAbaddon joined #gluster
01:11 lyang0 joined #gluster
01:28 vpshastry joined #gluster
01:29 Ark joined #gluster
01:29 Honghui_ joined #gluster
01:34 gdubreui joined #gluster
01:43 B21956 joined #gluster
01:59 harish joined #gluster
02:49 bharata-rao joined #gluster
03:09 ir8 joined #gluster
03:09 ir8 Good evening everyone.
03:10 ir8 Can I get some assistance with setting up gluster?
03:18 wgao joined #gluster
03:19 Ylannj left #gluster
03:24 ninkotech joined #gluster
03:33 sputnik1_ joined #gluster
03:33 Ark joined #gluster
03:47 itisravi joined #gluster
03:48 nishanth joined #gluster
03:48 nthomas joined #gluster
03:52 DV joined #gluster
03:52 haomaiwa_ joined #gluster
03:56 sputnik13 joined #gluster
04:00 shubhendu joined #gluster
04:02 ir8 [2014-04-28 03:58:25.762212] E [glusterfsd-mgmt.c:1398:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/var/www/vhosts/pinac/fs/web1a)
04:02 harish joined #gluster
04:02 ir8 what does that actually mean?
04:02 rastar joined #gluster
04:12 hagarth joined #gluster
04:14 ir8 Ello.
04:14 ir8 hagarth: you have a few minutes by chance?
04:15 ir8 can you help me solve this issue: http://pastebin.com/bHkZJLcA
04:15 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
04:16 ir8 http://paste.ubuntu.com/7349939/
04:16 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
04:18 ndarshan joined #gluster
04:20 davinder joined #gluster
04:22 kanagaraj joined #gluster
04:23 kdhananjay joined #gluster
04:25 sputnik13 joined #gluster
04:25 hagarth ir8: [2014-04-28 04:09:01.521248] E [glusterfsd-mgmt.c:1398:mgmt_getspec_cbk]  0-mgmt: failed to fetch volume file
04:26 hagarth ir8: you should use mount -t glusterfs web1:/pinac </mnt/point>
04:29 saurabh joined #gluster
04:30 sputnik1_ joined #gluster
04:34 ir8 oooo
04:34 ir8 my question for is...
04:34 ir8 how should I create my volumes by chance?
04:34 kasturi joined #gluster
04:34 atinmu joined #gluster
04:34 ir8 Are you able to walk though this if you have a few minutes/
04:35 ir8 gluster volume create pinac replica 3 transport tcp web1:/var/www/vhosts/pinac/fs/web1a web2:/var/www/vhosts/pinac/fs/web2a web3:/var/www/vhosts/pinac/fs/web3a
04:35 ir8 is that even correct?
04:36 ir8 root@web2:/var/www/vhosts/pinac/fs# mount -t glusterfs web1:/panic /var/www/vhosts/pinac/www/
04:36 ir8 Mount failed. Please check the log file for more details.
04:37 ppai joined #gluster
04:40 lalatenduM joined #gluster
04:41 ravindran1 joined #gluster
04:45 dusmant joined #gluster
04:50 cyber_si_ ir8, gluster volume create pinac  ....   mount -t glusterfs web1:/panic
04:51 deepakcs joined #gluster
04:55 bala joined #gluster
05:04 ir8 cyber_si_: got it.
05:05 ir8 gluster volume create pinac replica 3 transport tcp web1:/var/www/vhosts/pinac/fs/web1a web2:/var/www/vhosts/pinac/fs/web2a web3:/var/www/vhosts/pinac/fs/web3a
05:05 ir8 that should work correct?
05:06 cyber_si_ what shows `gluster volume status pinac`?
05:06 ravindran1 joined #gluster
05:10 ir8 sure/
05:11 ir8 can I spaste it to you?
05:11 ir8 volume start: pinac: success
05:11 ir8 root@web1:/var/www/vhosts/pinac/fs# gluster volume status pinac
05:11 ir8 Status of volume: pinac
05:11 ir8 Gluster process                                    Port     Online  Pid
05:11 ir8 ------------------------------------------------------------------------------
05:11 ir8 Brick web1:/var/www/vhosts/pinac/fs/web1a          49154    Y       8906
05:11 ir8 Brick web2:/var/www/vhosts/pinac/fs/web2a          49154    Y       12853
05:11 ir8 Brick web3:/var/www/vhosts/pinac/fs/web3a          49154    Y       11722
05:11 ir8 NFS Server on localhost                            N/A      N       N/A
05:11 lalatenduM @pastebin
05:11 glusterbot lalatenduM: I do not know about 'pastebin', but I do know about these similar topics: 'paste', 'pasteinfo'
05:11 ir8 Self-heal Daemon on localhost                      N/A      N       N/A
05:11 ir8 NFS Server on web3                                 N/A      N       N/A
05:11 prasanthp joined #gluster
05:11 ir8 Self-heal Daemon on web3                           N/A      N       N/A
05:11 ir8 NFS Server on web2                                 N/A      N       N/A
05:11 ir8 Self-heal Daemon on web2                           N/A      N       N/A
05:11 ir8 Task Status of Volume pinac
05:11 ir8 ------------------------------------------------------------------------------
05:11 ir8 There are no active volume tasks
05:11 ir8 whoops
05:11 lalatenduM ir8, please use "pastebin"
05:11 lalatenduM @paste
05:11 glusterbot lalatenduM: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
05:11 ir8 http://paste.ubuntu.com/7350166/
05:11 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
05:12 ir8 sorry about that guys.
05:12 ir8 lalatenduM: what do you think i am missing by chance?
05:13 ir8 cyber_si_: thanks for time helping my noobish self.
05:13 lalatenduM ir8, what is the issue you are facing?
05:13 ir8 i am unable to mount this volume and share the files.
05:14 ir8 Again I also have no clue what i am doing with gluster been reading docs for a while now.
05:14 ir8 Figure i'd join here and ask for some help.
05:14 ir8 http://paste.ubuntu.com/7349939/ (have a look here)
05:14 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
05:15 cyber_si_ `failed to fetch volume file (key:/var/www/vhosts/pinac/fs/web1a)`
05:15 cyber_si_ you should mount pinac volume
05:16 cyber_si_ mount -t glusterfs web1:/pinac /var/www/vhosts/pinac/www/
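A recap of what ir8 and cyber_si_ work out above, as a minimal sketch using ir8's hostnames and brick paths (the key point: mount the volume name, not a brick directory):

    # on any one peer: create and start the replica-3 volume
    gluster volume create pinac replica 3 transport tcp \
        web1:/var/www/vhosts/pinac/fs/web1a \
        web2:/var/www/vhosts/pinac/fs/web2a \
        web3:/var/www/vhosts/pinac/fs/web3a
    gluster volume start pinac

    # on each machine that needs the data: mount the volume, pointing at any of the servers
    mount -t glusterfs web1:/pinac /var/www/vhosts/pinac/www/

Replication then happens between the bricks automatically; clients never write to the brick directories directly.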
05:16 ravindran1 joined #gluster
05:17 ir8 on which servers?
05:17 ir8 all of them
05:19 cyber_si_ one of them
05:19 ir8 okay got it.
05:19 Honghui_ joined #gluster
05:20 lalatenduM ir8, iptable rules and selinux give these kind of issues
05:20 ir8 After that wht about all my other nodes?
05:21 ir8 show does my data replicate by chance now?
05:22 ir8 how*
05:22 bala joined #gluster
05:26 RameshN joined #gluster
05:26 ir8 I mount the blicks?
05:26 ir8 sorry about this I am rather new to Gluster.
05:27 ppai joined #gluster
05:27 nshaikh joined #gluster
05:29 lalatenduM ir8, check http://www.gluster.org/community/documentation/index.php/QuickStart
05:29 glusterbot Title: QuickStart - GlusterDocumentation (at www.gluster.org)
05:29 lalatenduM ir8, the admin guide is at https://github.com/gluster/glusterfs/tree/master/doc/admin-guide/en-US/markdown
05:29 glusterbot Title: glusterfs/doc/admin-guide/en-US/markdown at master · gluster/glusterfs · GitHub (at github.com)
05:30 mohan_ joined #gluster
05:30 lalatenduM ir8, another one, https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
05:30 glusterbot Title: glusterfs/doc/admin-guide/en-US/markdown/admin_setting_volumes.md at master · gluster/glusterfs · GitHub (at github.com)
05:31 rjoseph joined #gluster
05:32 ir8 Sure.
05:41 atinmu joined #gluster
05:42 ir8 thanks it works at this time
05:48 vpshastry joined #gluster
05:55 hagarth joined #gluster
06:00 rahulcs joined #gluster
06:00 davinder joined #gluster
06:04 edward2 joined #gluster
06:06 surabhi joined #gluster
06:09 kumar joined #gluster
06:15 benjamin_____ joined #gluster
06:16 vpshastry1 joined #gluster
06:21 rjoseph joined #gluster
06:23 davinder joined #gluster
06:25 meghanam joined #gluster
06:25 meghanam_ joined #gluster
06:30 92AAAVF40 joined #gluster
06:32 ppai joined #gluster
06:32 atinmu joined #gluster
06:38 XpineX joined #gluster
06:41 hagarth joined #gluster
06:43 badone_ joined #gluster
06:44 dubey joined #gluster
06:44 dubey Hello
06:44 glusterbot dubey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:44 ekuric joined #gluster
06:46 ir8 thanks guys!!
06:55 ctria joined #gluster
06:55 tjikkun_work joined #gluster
06:59 giannello joined #gluster
07:03 eseyman joined #gluster
07:04 the-me joined #gluster
07:08 hybrid512 joined #gluster
07:11 hybrid512 joined #gluster
07:17 ngoswami joined #gluster
07:21 keytab joined #gluster
07:22 Philambdo joined #gluster
07:33 fsimonce joined #gluster
07:36 rahulcs joined #gluster
07:40 atinmu joined #gluster
07:42 vpshastry1 joined #gluster
07:47 ThatGraemeGuy joined #gluster
07:49 gdubreui joined #gluster
07:53 Pavid7 joined #gluster
07:53 giannello joined #gluster
07:53 mkzero joined #gluster
08:02 andreask joined #gluster
08:06 psharma joined #gluster
08:16 ktosiek joined #gluster
08:20 giannello joined #gluster
08:26 keytab joined #gluster
08:28 MrAbaddon joined #gluster
08:40 Ark joined #gluster
08:42 rahulcs joined #gluster
08:43 glafouille joined #gluster
08:45 raghu joined #gluster
08:52 keytab joined #gluster
08:57 nullck joined #gluster
08:59 Chewi joined #gluster
08:59 Durzo someone mentioned my name, xchat highlighted the channel but its gone from my buffer.. im sorry whoever was asking for me.. :/
09:01 rahulcs joined #gluster
09:02 Honghui_ joined #gluster
09:07 saravanakumar1 joined #gluster
09:14 ajha joined #gluster
09:18 sputnik13 joined #gluster
09:18 Andy6 joined #gluster
09:25 RameshN joined #gluster
09:31 MrAbaddon joined #gluster
09:31 ndevos Durzo: maybe you find it in the logs - https://botbot.me/freenode/gluster/
09:31 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
09:36 ninkotech joined #gluster
09:36 ninkotech_ joined #gluster
09:37 Slashman joined #gluster
09:42 Durzo cheers ndevos
09:42 Durzo has anyone got any experience using the BD xlater in replica mode?
09:42 Durzo with LVM
09:42 davinder joined #gluster
09:47 rahulcs joined #gluster
09:47 davinder joined #gluster
09:51 hagarth joined #gluster
09:53 nullck joined #gluster
09:54 glusterbot New news from newglusterbugs: [Bug 1091898] [barrier] "cp -a" operation hangs on NFS mount, while barrier is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1091898> || [Bug 1091902] [barrier] fsync on NFS mount was not barriered, when barrier was enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1091902>
09:54 bharata-rao joined #gluster
09:54 edward2 joined #gluster
09:55 rahulcs joined #gluster
09:56 MrAbaddon joined #gluster
10:19 ppai joined #gluster
10:20 Durzo joined #gluster
10:20 Pavid7 joined #gluster
10:35 qdk_ joined #gluster
10:35 morsik left #gluster
10:38 Honghui_ joined #gluster
10:40 ppai joined #gluster
10:57 lalatenduM joined #gluster
11:04 basso joined #gluster
11:06 liquidat joined #gluster
11:07 chirino joined #gluster
11:12 harish joined #gluster
11:13 rahulcs joined #gluster
11:18 diegows joined #gluster
11:20 ppai joined #gluster
11:24 glusterbot New news from newglusterbugs: [Bug 1091935] Inappropriate error message generated when non-resolvable hostname is given for peer in 'gluster volume create' command for distribute-replicate volume creation <https://bugzilla.redhat.com/show_bug.cgi?id=1091935>
11:26 Honghui_ joined #gluster
11:35 haomai___ joined #gluster
11:36 Pavid7 joined #gluster
11:38 rahulcs joined #gluster
11:38 haom_____ joined #gluster
11:45 Ark joined #gluster
11:52 dusmant joined #gluster
11:53 rahulcs joined #gluster
11:54 glusterbot New news from newglusterbugs: [Bug 1091961] libgfchangelog: API to consume historical changelog. <https://bugzilla.redhat.com/show_bug.cgi?id=1091961>
12:02 cvdyoung Good morning, how do I mount a glusterfs volume that was created with transport rdma,tcp as rdma?  I have my fstab set to mount the volume as: SERVERNAME_IB:/gv0 /mnt/gfs_rdma glusterfs transport=rdma,_netdev 0 0
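cvdyoung's question goes unanswered in the log; for reference, one approach documented for tcp,rdma volumes (an assumption here, not something confirmed in this conversation) is to mount the volume's rdma volfile, which is published as <volname>.rdma:

    # fstab entry requesting the rdma transport (hostname and volume from cvdyoung's example)
    SERVERNAME_IB:/gv0.rdma  /mnt/gfs_rdma  glusterfs  transport=rdma,_netdev  0 0

    # or by hand
    mount -t glusterfs -o transport=rdma SERVERNAME_IB:/gv0.rdma /mnt/gfs_rdma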
12:04 ppai joined #gluster
12:04 DV joined #gluster
12:09 benjamin_____ joined #gluster
12:11 dusmant joined #gluster
12:15 nshaikh joined #gluster
12:16 Pavid7 joined #gluster
12:21 keytab joined #gluster
12:21 lalatenduM joined #gluster
12:22 ppai joined #gluster
12:22 itisravi joined #gluster
12:27 Ark joined #gluster
12:36 prasanth_ joined #gluster
12:37 ndk joined #gluster
12:40 Pavid7 joined #gluster
12:43 andreask joined #gluster
12:48 92AAAVF40 left #gluster
12:53 cvdyoung For a local glusterfs volume, what type of read speeds are considered acceptable?  I have a single 42G file and am seeing 668 MB/s, is that good?
12:53 JoseBravoHome joined #gluster
12:54 rwheeler joined #gluster
12:55 dlambrig_ joined #gluster
12:55 JoseBravoHome I have a gluster client 3.5.0 working in a centos 6.5, and I configured this line in the /etc/fstab "gluster1":/backups /backups glusterfs defaults,_netdev 0 0" but when I reboot the server the /backups does not mount. Only if I type mount /backups it does
12:57 GabrieleV joined #gluster
12:58 rahulcs joined #gluster
13:00 cvdyoung Hi JoseBravoHome, does "gluster1" resolve properly?
13:02 JoseBravoHome Yes, and I tried with the IP too
13:02 JoseBravoHome The problem is that it's trying to mount when the network isn't up yet even if the _netdev is in the fstab file
13:03 JoseBravoHome I checked the logs and it was the error: [2014-04-28 12:52:48.233669] E [socket.c:2161:socket_connect_finish] 0-glusterfs: connection to 192.168.50.2:24007 failed (No route to host)
13:03 sroy joined #gluster
13:04 jmarley joined #gluster
13:04 jmarley joined #gluster
13:06 fsimonce` joined #gluster
13:14 dbruhn joined #gluster
13:14 japuzzo joined #gluster
13:15 TvL2386 joined #gluster
13:18 ctria joined #gluster
13:23 ndevos JoseBravoHome: sounds like http://mjanja.co.ke/2014/04/glusterfs-mounts-fail-at-boot-on-centos
13:24 vpshastry1 left #gluster
13:25 glusterbot New news from newglusterbugs: [Bug 1091648] Bad error checking code in feature/gfid-access <https://bugzilla.redhat.com/show_bug.cgi?id=1091648>
13:27 bennyturns joined #gluster
13:37 fsimonce joined #gluster
13:40 Slashman_ joined #gluster
13:49 fsimonce` joined #gluster
13:50 rahulcs joined #gluster
13:52 psharma joined #gluster
13:53 fsimonce joined #gluster
13:57 rahulcs joined #gluster
13:59 kaptk2 joined #gluster
14:03 Humble joined #gluster
14:04 LoudNoises joined #gluster
14:06 B21956 joined #gluster
14:13 cvdyoung JoseBravoHome - Inside of your /etc/fstab file, make certain you're using _netdev.  This tells the system to wait until networking is enabled, then mount the drive, or it thinks it's local disk.
14:14 Chewi cvdyoung: he already said he was
14:14 Guest70763 cvdyoung: Do you just pass that in as an option like  xfs  defaults,_netdev 0 0  ?
14:14 cvdyoung yes
14:15 Guest70763 cvdyoung: thank you that is a handy tip!
14:15 cvdyoung ah  I see it now
14:17 rahulcs joined #gluster
14:20 dlambrig_ left #gluster
14:25 JoseBravoHome I'm testing that ndevos suggested
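The article ndevos links covers glusterfs mounts racing the network at boot; a workaround often used on EL6 (an assumption, not quoted from the article) is to keep _netdev in fstab and retry the mount late in boot:

    # /etc/fstab -- _netdev keeps the early, pre-network mount pass from touching it
    gluster1:/backups  /backups  glusterfs  defaults,_netdev  0 0

    # /etc/rc.local -- retry any glusterfs entries once the network (and glusterd) are up
    mount -a -t glusterfs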
14:27 somepoortech joined #gluster
14:31 jbrooks joined #gluster
14:52 jag3773 joined #gluster
14:55 glusterbot New news from newglusterbugs: [Bug 1092037] Issues reported by Cppcheck static analysis tool <https://bugzilla.redhat.com/show_bug.cgi?id=1092037>
14:57 B21956 joined #gluster
14:57 sjoeboo joined #gluster
14:57 nullck_ joined #gluster
15:00 Honghui joined #gluster
15:02 cvdyoung If I have a system with 3 RAID6 devices, and create brick01-03, how do I use those to create a single mount point like /home?
15:03 kkeithley lalatenduM: ping. What's the cppcheck command you used? Was it just `cppcheck .`?
15:04 lalatenduM kkeithley, I used cppcheck <path> 2>logfile , I have updated http://www.gluster.org/community/documentation/index.php/Fixing_Issues_Reported_By_Tools_For_Static_Code_Analysis
15:04 glusterbot Title: Fixing Issues Reported By Tools For Static Code Analysis - GlusterDocumentation (at www.gluster.org)
15:04 kkeithley okay, thanks
15:05 lalatenduM kkeithley, we can use enable all warnings etc, but as of now Ubuntu main is getting blocked of only errors
15:05 lalatenduM kkeithley, please update the wiki page I missed anything
15:06 lalatenduM s/page/page if/
15:06 glusterbot What lalatenduM meant to say was: kkeithley, please update the wiki page if I missed anything
15:06 kkeithley yup
15:15 kmai007 joined #gluster
15:16 kmai007 would there be any issues if my gluster volume has different ports for its bricks?  in a 8 node setup
15:16 lmickh joined #gluster
15:21 daMaestro joined #gluster
15:26 ndevos kmai007: no, see ,,(ports) for more details
15:26 glusterbot kmai007: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
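Translated into firewall terms, the factoid above amounts to something like this on a 3.4 server (a sketch assuming iptables; widen the brick range to cover however many bricks the node hosts):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management / rdma
    iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT   # brick ports for 3.4+
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111  -j ACCEPT          # rpcbind/portmap
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS (3.4+)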
15:26 kmai007 ndevos
15:26 kmai007 i was hoping to meet you in person at summit,
15:27 jobewan joined #gluster
15:27 ndevos kmai007: ah, well, summit is not for everyone :-/
15:27 ndevos kmai007: I'll be attending FISL next week, maybe you're there too?
15:27 kmai007 ndevos: i had a 4 node gluster, then i absorbed 4 more servers that use to be 2 node glusters,
15:28 kmai007 i created a new volume across 8 storage servers, and now when i do a gluster volume status, i have several different brick ports for the volume
15:29 radez My heal info command for my volume reports I have one unsynced entry and the entry that it lists is /
15:29 shubhendu joined #gluster
15:29 kmai007 no my company won't cover int'l travel
15:29 radez is there a way to clean that up so it reports no sync errors?
15:30 ndevos kmai007: yeah, there is a counter for the ports somewhere, and that gets increased with eache new brick - thats what causes the different ports for you
15:30 kmai007 radez: how many storage servers?
15:30 radez kmai007: 3
15:30 kmai007 so do all of them report the same offender?
15:30 radez http://fpaste.org/97479/98699024/
15:30 glusterbot Title: #97479 Fedora Project Pastebin (at fpaste.org)
15:31 radez yup
15:31 kmai007 ndevos: that makes sense, but is it ok to operate on different ports on different bricks for the same volume?
15:31 ndevos kmai007: yes, it really is not an issue
15:32 kmai007 ndevos: thanks for the validation
15:32 kmai007 sorry reassurance
15:32 ndevos no problem!
15:32 kmai007 radez: have you verified that / is not an issue on those bricks?
15:33 kmai007 are any results of info split-brain ?
15:34 kmai007 ndevos: i'm reading your guide on wireshark now, man i have an unusual issue that i experienced in production that i cannot recreate in my DEV environment and that I cannot see from debug logs, so i'm going to try wireshark
15:34 radez kmai007: the files and their sizes match exactly on the bricks, I've not looked at split brain
15:34 Pavid7 joined #gluster
15:36 kmai007 what version of glusterfs ?
15:36 kmai007 to my understanding, the only way to reset the "heal info" results is to recycle glusterd
15:36 kmai007 thats the only way I am able to clear my info split-brain results, and ensure I corrected the appriopriate files
15:36 ndevos kmai007: I'm alsmost done for the day, but if you need assistance on debugging with wireshark, send an email to gluster-devel@gluster.org and I'll try to respond tomorrow morning
15:37 radez yea split-brain is full of entries for /
15:37 kmai007 thanks ndevos have a good night
15:37 radez kmai007: when you say recycle do you mean just restart the service on each host?
15:37 kmai007 radez: fpaste output
15:37 ndevos cya kmai007!
15:37 hagarth joined #gluster
15:38 radez kmai007: http://fpaste.org/97483/98699471/ there's 553, 554 or 555 results for each of the entries in the last post
15:38 glusterbot Title: #97483 Fedora Project Pastebin (at fpaste.org)
15:38 radez they all look like this, a date from last week and referencing root
15:40 kmai007 hmmm....is this replicated or distributed ?
15:41 radez replicated, I just bounced glusterd on the second two nodes (14 and 15 in fpaste) 14 has recovered now...
15:41 radez oh and version is glusterfs-server-3.5.0-2.el6.x86_64
15:43 kmai007 so it cleared from your heal info output?
15:43 kmai007 i've not used 3.5.0 yet, i'm on 3.4.2
15:43 radez hm, nope, maybe it was still refreshing itself. back to the same status
15:43 * radez checks split-brain
15:44 radez looks like the splitbrain logs are just filling back up again
15:44 radez http://fpaste.org/97486/98699883/
15:44 glusterbot Title: #97486 Fedora Project Pastebin (at fpaste.org)
15:45 kmai007 if its replicated, does it have the same GFID for / compared to a brick that does not report the problem?
15:47 radez oh so it looks like it distributed-replicated
15:47 radez where do I find the gfid?
15:50 kmai007 (splitbrain)
15:50 kmai007 @splitbrain
15:50 glusterbot kmai007: To heal split-brain in 3.3+, see http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ .
15:50 kmai007 radez: if you have RH you can see https://access.redhat.com/site/solutions/193843
15:50 glusterbot Title: How to recover a file from a split-brain on a Red Hat Storage volume ? - Red Hat Customer Portal (at access.redhat.com)
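To radez's earlier question about where the gfid lives: it is an extended attribute on the brick-side copy of the file or directory, readable with getfattr (paths below are placeholders):

    # run on the brick itself, not through a client mount
    getfattr -n trusted.gfid -e hex /path/to/brick/some/dir

    # the afr changelog attributes used for split-brain decisions sit alongside it
    getfattr -d -m trusted.afr -e hex /path/to/brick/some/dir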
15:51 ctria joined #gluster
15:51 davinder joined #gluster
15:52 vpshastry joined #gluster
15:52 radez kmai007: I'll give that a try thx
15:53 keytab joined #gluster
15:56 rwheeler joined #gluster
15:59 keytab joined #gluster
15:59 rwheeler_ joined #gluster
16:01 dewey joined #gluster
16:18 theron joined #gluster
16:18 mjsmith2 joined #gluster
16:22 Mo__ joined #gluster
16:25 vpshastry left #gluster
16:30 Matthaeus joined #gluster
16:45 Ark joined #gluster
16:49 SFLimey joined #gluster
16:59 zerick joined #gluster
17:09 kmai007 what would be AWESOME is when i run 'gluster volume status' it would also provide status of the brick that was "offline"
17:12 tdasilva joined #gluster
17:20 ira joined #gluster
17:21 kmai007 i guess my dumb butt should have checked gluster peer status
17:21 sadbox joined #gluster
17:21 kmai007 and know that i stopped glusterd on a brick to do some maintenance....LOL
17:23 sputnik1_ joined #gluster
17:28 glafouille joined #gluster
17:31 MrAbaddon joined #gluster
17:31 msp3k1 joined #gluster
17:33 chirino joined #gluster
17:33 msp3k1 Hi guys,  I have a problem.  The root partition of one of my nodes filled up.  One of the log files for the brick ate up all the space.  I've fixed the full partition problem, but glusterd won't start.  It says, "Initialization of volume 'management' failed, review your volfile again".  How can I fix this?
17:34 bennyturns joined #gluster
17:34 theron joined #gluster
17:35 zaitcev joined #gluster
17:36 dbruhn msp3k1, you will need to copy the volume files from one of the good hosts
17:36 dbruhn how many gluster servers do you have?
17:37 ricky-ticky joined #gluster
17:37 msp3k1 So copy over /etc/glusterfs/glusterfsd.vol from another host?  I can do that.
17:37 dbruhn sec
17:37 kmai007 there isa CLI cmd to do that
17:38 dbruhn I never remember it
17:38 kmai007 volume sync <HOSTNAME> [all|<VOLNAME>] - sync the volume information from a peer
17:38 dbruhn kmai007, can you add that to this page
17:38 dbruhn http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting
17:38 glusterbot Title: Basic Gluster Troubleshooting - GlusterDocumentation (at www.gluster.org)
17:38 kmai007 dbruhn: i dunno how, i don't have privledge
17:38 msp3k1 # gluster volume sync bkupc1-a
17:38 msp3k1 Connection failed. Please check if gluster daemon is operational.
17:39 dbruhn msp3k1, you'll also probably want to setup log rotate to make sure it doesn't happen in the future
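On the logrotate aside: many packaged builds of glusterfs ship a logrotate policy, but if yours does not, a minimal sketch looks like this (retention values are arbitrary; copytruncate avoids having to signal the daemons):

    # /etc/logrotate.d/glusterfs (hypothetical)
    /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
        copytruncate
    }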
17:39 msp3k1 I can't start glusterd until I sync the volume file, but I can't sync the volume file until I start glusterd?
17:40 dbruhn sec
17:40 msp3k1 The /etc/glusterfs/glusterd.vol is identical to the one on the other host.
17:40 dbruhn backup your /var/lib/glusterd files on all of the servers before doing anything
17:41 dbruhn then delete everything except the glusterd.info file, this has the UUID for the server
17:41 dbruhn then restart gluster
17:41 dbruhn and then run the command
17:42 msp3k1 On each host, I have run: tar -cvf gluster-backup-140428.tar /var/lib/glusterd
17:43 msp3k1 Should I delete files from /var/lib/glusterd only on the affected host, or on all hosts?
17:43 kmai007 only the affected host is what i recall
17:43 dbruhn yep only the effected host
17:44 dbruhn don't delete that info file though
17:44 dbruhn that file identifies your server to the group
17:44 msp3k1 On the affected host, /var/lib/glusterd/glusterd.info is zero bytes.
17:44 dbruhn you will need to run a peer probe from the effected host
17:44 dbruhn and then from one of the good hosts too
17:44 dbruhn if I remember right
17:44 msp3k1 I don't suppose there's a way to rebuild the glusterd.info file from information on the other hosts?
17:44 dbruhn yep there is
17:45 dbruhn on one of the good hosts
17:45 dbruhn run gluster peer status
17:45 msp3k1 Awesome.
17:46 LessSeen_ joined #gluster
17:46 andreask joined #gluster
17:46 dbruhn the file contains a single line UUID=
17:46 dbruhn and then the correct UUID for the server
17:46 kmai007 here is how u would find the UUID from another storage server
17:46 kmai007 http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
17:46 msp3k1 When deleting stuff from /var/lib/glusterd, should I only delete the files, and leave the directory structure intact?
17:46 glusterbot Title: Gluster 3.4: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at gluster.org)
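Condensing the glusterd.info recovery being described here (the UUID placeholder is filled in from the healthy peer's output; see the Brick Restoration page just linked):

    # on a healthy peer: note the Uuid: reported for the broken host
    gluster peer status

    # on the broken host: recreate the file with that UUID
    echo "UUID=<uuid-from-peer-status>" > /var/lib/glusterd/glusterd.info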
17:46 dbruhn I've been through this twice, I just removed everything
17:46 glafouille joined #gluster
17:47 dbruhn kmai007, just posted a useful link
17:48 msp3k1 Ok.  glusterd is running...  Running a peer probe on the affected host.
17:49 kmai007 good work msp3k1
17:49 msp3k1 All the peers see each other as connected.
17:49 Pavid7 joined #gluster
17:50 RameshN joined #gluster
17:50 msp3k1 Now I should do: gluster volume sync all -- right?
17:50 kmai007 yes if gluster volume info
17:51 kmai007 returns nothing on the broke server
17:51 kmai007 gluster volume sync <good storage> all
17:51 kmai007 i believe it grabs all the vol configs from that server to itself
17:51 andreask joined #gluster
17:52 kmai007 gluster volume info
17:52 kmai007 does it give u anything?
17:52 dbruhn kmai007, indeed it does
17:52 kmai007 i only had to do it once :-P
17:52 kmai007 so i cannot recall often
17:53 msp3k1 gluster volume info returned "No volumes present", so I've run "gluster volume sync bkupc1-a all"
17:53 kmai007 kick ass
17:53 kmai007 now on another session run gluster volume info
17:53 kmai007 and start to see it populate...
17:53 msp3k1 Now I see all the info on the affectes host
17:54 kmai007 kick ass
17:54 kmai007 now i think the last step is to run a heal
17:54 kmai007 you won't need to add the vol id's b/c it should have been still existing
17:54 dbruhn Shouldn't have to do that, but it would make sure things are in sync on the effected brick servers.
17:55 kmai007 true,
17:55 kmai007 else in the next 600 seconds the AFR will run itself
17:55 msp3k1 So on the affected host, run: gluster volume heal <volname>
17:55 kmai007 naw
17:55 kmai007 just run
17:55 kmai007 gluster volume heal <volname> info
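The recovery sequence msp3k1 just went through, condensed (bkupc1-a stands in for any healthy peer, bkupc1 for the volume):

    # on the recovered host, after clearing /var/lib/glusterd (keeping glusterd.info) and restarting glusterd
    gluster peer probe bkupc1-a        # re-establish peering; probe back from a good host as well
    gluster volume sync bkupc1-a all   # pull the volume definitions from the good peer
    gluster volume info                # confirm the volumes are visible again
    gluster volume heal bkupc1 info    # watch for anything left to self-heal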
17:55 dbruhn Also keep an eye out for split-brain issues, I had that command not full work and it actually had my servers in a weird split-brain issue, I didn't see it till later. Somehow the UUID on a server was changed.
17:55 LessSee__ joined #gluster
17:56 dbruhn msp3k1, you can run that from any of the servers
17:56 kmai007 dbruhn: as of late, i hadn't had to clean any splits....
17:56 msp3k1 Says the self-heal daemon is not running on the affected host.  Do I need to type something to start it running?
17:56 kmai007 but i'll knock on wood right now
17:57 kmai007 service glusterd restart
17:57 kmai007 then ps -ef|grep glustershd
17:57 kmai007 actually
17:57 kmai007 check for that PID first
17:58 msp3k1 Restarted gluster.
17:59 kmai007 is there a PID for glustershd
17:59 msp3k1 ps -ef | grep glustershd only shows /usr/sbin/glusterfs [...] -l /var/log/glusterfs/glustershd.log [...]
17:59 msp3k1 There is no pid for a glustershd process
18:00 kmai007 sorry is this rhel?
18:00 kmai007 centos
18:00 kmai007 fedora
18:00 kmai007 strange, i see one on my system
18:00 kmai007 [root@omhq1826 ~]# ps -ef|grep glustershd
18:00 kmai007 root     19892     1  0 10:55 ?        00:00:41 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd
18:01 msp3k1 No, this is ubuntu, with gluster 3.4.3 from the ppa
18:01 kmai007 oh my bad
18:01 kmai007 tail the log you output
18:01 kmai007 does it give you insight?
18:03 msp3k1 Looking at the logs now...
18:04 msp3k1 brb
18:07 msp3k1 back
18:07 davinder joined #gluster
18:07 Intensity joined #gluster
18:09 dbruhn msp3k1, sorry stepped away, did you by chance restart the server?
18:10 ricky-ticky1 joined #gluster
18:10 msp3k1 I did.  /var/log/glusterfs/glustersdh.log shows several lines of "Another crawl is in progress for"
18:11 dbruhn Well there is your answer
18:11 dbruhn what does "gluster volume status" output
18:11 kmai007 run this
18:11 kmai007 gluster status <volname> shd
18:12 kmai007 or gluster volume status
18:12 kmai007 lol
18:12 kmai007 localhost should have  PID
18:12 kmai007 online = Y
18:12 msp3k1 On the affected host: # gluster volume status bkupc1 shd
18:12 msp3k1 Status of volume: bkupc1
18:12 msp3k1 Gluster process                         Port     Online  Pid
18:12 msp3k1 ------------------------------------------------------------------------------
18:12 msp3k1 Self-heal Daemon on localhost           N/A      Y       12361
18:12 msp3k1 Self-heal Daemon on bkupc1-a            N/A      Y       2541
18:12 msp3k1 Self-heal Daemon on bkupc1-c            N/A      Y       3070
18:12 kmai007 sounds like you have a chicken dinner
18:13 dbruhn then it's running
18:13 kmai007 winner winner
18:13 msp3k1 Awesome!
18:13 ctria joined #gluster
18:14 kmai007 dbruhn: do u have a script i can use to write a bunch of small files?
18:14 kmai007 i need to break my glusterR&D cluster
18:14 dbruhn kinda of
18:14 dbruhn http://www.artisnotcrime.com/2009/10/bash-script-for-test-data-generation.html
18:14 glusterbot Title: ArtIsNotCrime: Bash script for test data generation (at www.artisnotcrime.com)
18:14 dbruhn you could do it faster using /dev/zero
18:15 dbruhn but I needed ascii for the test I built that with
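A /dev/zero variant of what dbruhn describes, as a rough sketch (file count, size and target path are arbitrary):

    # generate 10,000 small files on a gluster mount to create small-file load
    for i in $(seq 1 10000); do
        dd if=/dev/zero of=/mnt/glustervol/file.$i bs=4k count=1 2>/dev/null
    done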
18:15 kmai007 pefecto!
18:16 andreask joined #gluster
18:21 JoeJulian I spoke with Michael 'Monty' Widenius about libgfapi support for innodb on Saturday.
18:21 dbruhn really? that would be pretty awesome
18:21 JoeJulian He seemed interested.
18:23 JoeJulian He brought up the idea of wrapping the innodb library with another library to essentially just translate the glibc calls to libgfapi calls. Seems like a pretty simple idea, actually.
18:24 msp3k1 On the affected host, one of the peers shows it as not being in the cluster.  Should I be concerned about this?
18:24 coredump joined #gluster
18:25 coredump So... Today we had a switch reboot and some servers lost contact with gluster for a few minutes
18:25 kmai007 msp3k1: what cmd is giving you that output?
18:25 kmai007 gluster peer status?
18:25 dbruhn JoeJulian, totally. Interesting approach
18:25 coredump they got back but openstack keeps throwing permission denied errors, is there any chance that the connection drop may have done something to the volumes?
18:25 msp3k1 # gluster peer status
18:25 msp3k1 Number of Peers: 3
18:25 msp3k1 Hostname: bkupc1-d
18:25 msp3k1 Uuid: 93d34b9a-4f08-4473-af81-083bcadb1c1b
18:25 msp3k1 State: Sent and Received peer request (Connected)
18:25 msp3k1 Hostname: bkupc1-a
18:25 msp3k1 Uuid: ae005ef8-03be-4409-8766-ae9c858846fb
18:25 msp3k1 State: Peer in Cluster (Connected)
18:25 msp3k1 Hostname: bkupc1-c
18:25 msp3k1 Uuid: a0f03c0c-6215-47fc-ab30-8f4397872bad
18:25 msp3k1 State: Peer in Cluster (Connected)
18:26 dbruhn msp3k1, use fpaste for that stuff, so you don't accidentally get kicked from the channel
18:26 msp3k1 Ok
18:26 kmai007 getting kicked out hurts
18:27 dbruhn coredump, have you tried to access the files directly though the mount to see what's up?
18:27 kmai007 msp3k1: looks like fine there, is it a particular volume?  gluster volume info <volnam> ?
18:28 dbruhn anything in the logs?
18:28 msp3k1 gluster volume info shows bkupc1-d as being a part of the cluster.  It's just gluster peer status that says something different.
18:29 msp3k1 gluster peer status says "Peer in Cluster (Connected)" for all the other hosts.
18:35 LessSeen_ joined #gluster
18:37 msp3k1 I'm going to let it churn for a while and see what develops.  But it looks like I'm back on track for now.  THANK YOU GUYS!  Your help has been invaluable!
18:38 marcoceppi joined #gluster
18:38 marcoceppi joined #gluster
18:39 churnd i guess now is a good time to ask:  what is the best way to upgrade from 3.4 to 3.5?  glusterd has to stop, right?
18:43 kmai007 my understanding churnd, is to service glusterd stop
18:43 kmai007 killall gluster proceses
18:44 kmai007 then update package
18:44 kmai007 then service glusterd start
18:44 kmai007 then gluster volume heal <volnam> info and wait
18:44 kmai007 if everything is swell, then move on to the next
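kmai007's rolling-upgrade steps, written out for one storage server at a time (replicated volumes assumed; see also JoeJulian's 3.5.0 caveat further down):

    service glusterd stop
    killall glusterfsd glusterfs            # brick, nfs and self-heal processes too
    yum update glusterfs\*                  # or your distro's equivalent
    service glusterd start
    gluster volume heal <volname> info      # wait until this is clean before the next server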
18:46 JoeJulian protip: After package updates, do "lsof | grep deleted" and see if any pids are using stale libs.
18:47 churnd kmai007 so you just do one at a time
18:49 kmai007 JoeJulian: +1, great tip, that one has bit me in my ARSE
18:49 kmai007 churnd: 1 brick storage server at a time, if u don't want an outage,
18:49 B21956 joined #gluster
18:49 kmai007 if u can afford and outage, bang them all out by stopping the gluster* processes
18:50 churnd so won't cause a problem with one being a higher revision than the other?
18:51 kmai007 in a rolling update, no, i don't believe so, please see this doc for an idea. http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
18:51 kmai007 the method would be similar to 3.5.0
18:51 _dist joined #gluster
19:06 edward1 joined #gluster
19:07 ajha joined #gluster
19:08 JoeJulian There may be an issue with rolling updates to 3.5.0. If you have a test environment I would test it first.
19:10 sjoeboo joined #gluster
19:10 churnd JoeJulian what kind of issue
19:10 churnd this is a test environment kinda
19:10 JoeJulian Looks like there's a potential issue with the way they negotiate the rpc version capabilities.
19:11 JoeJulian I've been meaning to test it myself.
19:18 churnd also what happens if there are open files on the gluster filesystem during the upgrade?
19:21 MrAbaddon joined #gluster
19:21 JoeJulian churnd: if you do a rolling update and you're using replication, nothing interesting. Make sure "heal info" is clean between server upgrades.
19:22 churnd well in my case i have a two node cluster
19:22 churnd & the brick is also mounted on the node itself
19:22 churnd & one of my devs has a file open on the gluster fs right now  :)
19:25 churnd so if i upgraded, it'd unmount that fs, right?
19:26 JoeJulian Not if you don't kill "glusterfs".
19:29 failshell joined #gluster
19:32 churnd hm, for some reason i thought doing the upgrade via "yum upgrade" would do that
19:36 kmai007 did you list the package location? if you have it downloaded locally?
19:37 churnd "yum upgrade" installs the package automatically
19:37 kmai007 yum update ?
19:37 churnd yeah that
19:38 kmai007 i use yum update glusterfs
19:38 kmai007 id ont want to update th ekernel and everything
19:40 kkeithley (sudo) `yum update gluster\*` — only updates gluster
19:45 elico what are the options for a REST API to glusterfs? was it swift? any recommendation for a 2 primary nodes for a rest api interface?
19:47 churnd i mean i thought running "yum update glusterfs" would restart glusterd, which unmounts the filesystems on the node
19:50 lanning you really don't want an rpm to manage service instances.
19:51 lanning elico: swift is OpenStack's object store interface.  It is akin to Amazon's S3 service.
19:56 cvdyoung How do I take an already existing gluster volume running on server1 and expand it with disk from server2?
19:56 glusterbot New news from newglusterbugs: [Bug 1091777] Puppet module gluster (purpleidea/puppet-gluster) to support RHEL7/Fedora20 <https://bugzilla.redhat.com/show_bug.cgi?id=1091777>
19:56 lanning cvdyoung: https://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Expanding_Volumes
19:56 glusterbot Title: Gluster 3.1: Expanding Volumes - GlusterDocumentation (at www.gluster.org)
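For cvdyoung's question, the linked page boils down to roughly this (volume and brick names are placeholders; on a replicated volume, bricks have to be added in multiples of the replica count):

    gluster peer probe server2                                # bring server2 into the pool
    gluster volume add-brick VOLNAME server2:/export/brick1   # grow the volume
    gluster volume rebalance VOLNAME start                    # spread existing data onto the new brick
    gluster volume rebalance VOLNAME status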
19:59 churnd lanning well i don't know enough about it to say differently, just going by my previous past experience... i think when I was setting it up initially, there was an update the day after & i applied it & it caused the volumes to be unmounted b/c glusterd shut down
20:00 lanning was it shutdown, or did it crash.
20:00 lanning ?
20:00 churnd i'm not sure, just noticed that it was stopped
20:01 churnd b/c when I did it, there were open files on the filesystem by another process
20:03 lanning most likely the update caused a crash. Not a graceful shutdown in the install scripts.
20:03 gmcwhistler joined #gluster
20:08 JoeJulian lanning, churnd: rpm -q --scripts $package_name will show you the scripts that are executed for installing/upgrading/uninstalling packages. The glusterfs packages do, indeed, restart the service despite my argument for the contrary at bug 1022542
20:08 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1022542 unspecified, unspecified, ---, ndevos, CLOSED ERRATA, glusterfsd stop command does not stop bricks
20:09 churnd so it probably couldn't restart because of the open files
20:10 JoeJulian glusterd restarts. Under systemd, glusterd, glusterfsd and glusterfs processes for nfs and glustershd restart. The fuse application, glusterfs, used for your mounts does not restart.
20:11 lanning man, I hate when I'm wrong...
20:11 JoeJulian @meh
20:11 glusterbot JoeJulian: I'm not happy about it either
20:14 churnd lol
20:15 radez kmai007: thx for the help earlier, ended up having to delete content and .glusterfs dirs brick by brick and let the server rebuild the replicas
20:15 lanning so, unless you repackage the rpms, you need to do your own graceful service shutdown, update, then start the service. As you can't trust the rpm to "do the right thing(tm)"
20:15 DV joined #gluster
20:16 kmai007 radez: no prob. sorry you had to do that
20:16 churnd lanning yeah that's cool.  i'm by all intents and purposes a n00b at this so i'm going with the standard package repo way
20:16 radez I don't really understand what the issue was, but at least there's a way to get it cleaned up. I don't have a ton of data so it wasn't too painful
20:17 JoeJulian I suggest you file a bug report expressing that sentiment. I tried to convey my feelings that clustered services are different from other services but I was a lone voice.
20:17 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:17 kmai007 maybe ask JoeJulian, how / woudl show up on a heal
20:17 radez heh, getting to the end of the day. i'm not typing so well anymore :)
20:17 radez JoeJulian: have a sec for me to pick your brain about what I saw earlier?
20:18 JoeJulian @split-brain
20:18 glusterbot JoeJulian: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
20:18 JoeJulian @forget splitbrain
20:18 glusterbot JoeJulian: The operation succeeded.
20:18 JoeJulian @alias split-brain splitbrain
20:18 glusterbot JoeJulian: Error: This key has more than one factoid associated with it, but you have not provided a number.
20:18 JoeJulian @splitbrain
20:18 glusterbot JoeJulian: I do not know about 'splitbrain', but I do know about these similar topics: 'split brain', 'split-brain'
20:18 kmai007 JoeJulian: where can i get a list of all the @ words?
20:18 kmai007 @
20:19 kmai007 @givemeall
20:19 dbruhn @*
20:19 dbruhn lol
20:19 JoeJulian /tell glusterbot factoids help
20:19 JoeJulian iirc
20:19 kmai007 glusterbot:factoids help
20:19 JoeJulian @forget split brain
20:19 glusterbot JoeJulian: The operation succeeded.
20:19 semiosis @factoids rank --all
20:19 glusterbot semiosis: (factoids rank [<channel>] [--plain] [--alpha] [<number>]) -- Returns a list of top-ranked factoid keys, sorted by usage count (rank). If <number> is not provided, the default number of factoid keys returned is set by the rankListLength registry value. If --plain option is given, rank numbers and usage counts are not included in output. If --alpha option is given in addition to
20:19 radez JoeJulian: what if the split-brain entry is /
20:19 glusterbot semiosis: --plain, keys are sorted alphabetically, instead of by rank. <channel> is only necessary if the message isn't sent in the channel itself.
20:19 JoeJulian Do it directly to avoid completely spamming the channel please.
20:20 semiosis right
20:20 JoeJulian radez: Good question. I would probably just delete the trusted.afr extended attributes from them all.
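What JoeJulian is suggesting, sketched for a split-brain reported on the brick root (the attribute names follow the <volume>-client-<index> pattern; inspect before removing anything):

    # on each brick, look at the afr changelog attributes on the brick root
    getfattr -d -m trusted.afr -e hex /path/to/brick

    # then clear the stale ones, e.g. for the first two subvolume clients of volume "vol"
    setfattr -x trusted.afr.vol-client-0 /path/to/brick
    setfattr -x trusted.afr.vol-client-1 /path/to/brick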
20:22 radez JoeJulian: kk, I'll make note incase I run into it in the future. It's the second time it's happend to this cluster. first time was Jan 3rd 2014
20:22 kmai007 how about this semiosis http://fpaste.org/97598/87165271/
20:22 glusterbot Title: #97598 Fedora Project Pastebin (at fpaste.org)
20:23 JoeJulian There was a bug a while back where .glusterfs/00/00/[0-]\+01 would be a directory instead of a symlink and that could cause that too, but that should be fixed in current releases.
20:23 kmai007 JoeJulian: while back was approx. which release?
20:25 kmai007 it pains me to try to make sense of the logging something about loc->parent over and over and over
20:25 JoeJulian bug 859581
20:25 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=859581 high, unspecified, ---, vsomyaju, ASSIGNED , self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
20:25 JoeJulian Oh, it says it's not fixed...
20:26 JoeJulian Ok, so it's in 3.4.3
20:26 TvL2386 joined #gluster
20:26 JoeJulian but looks like it's still open 'cause it's not in RHS.
20:30 kmai007 lright just checked my prodgluster and i do not have that issue
20:30 kmai007 thank goodness
20:30 kmai007 stat /export/content/static/.glusterfs/00/00/00000000-0000-0000-0000-000000000001 is a link
20:32 LessSeen_ joined #gluster
20:33 DV joined #gluster
20:41 DV joined #gluster
20:44 elico lanning: So about the rest api storage object store. WEBDAV is one thing and maybe you can recommend on one?
20:44 elico I am almost sure I heard that there is an api for that on new 3.4 version.
20:46 lanning I know you can put up an apache server configured for WebDAV to a FUSE/NFS mount.  As for a direct WebDAV gateway, I have not looked.
20:47 elico well so the options are swift above glusterfs or other rest api storage based on glusterfs
20:47 lanning I am sure, with libgfapi, it should be easy to make, if one were inclined to code...
20:47 elico ho
20:48 badone_ joined #gluster
20:49 lanning I believe swift is the only REST API currently implemented that uses libgfapi to talk directly to GlusterFS. (No FUSE/NFS mount to go through)
20:50 lanning it is actually the swift proxy service, using libgfapi instead of the swift backend stores.
20:50 JoeJulian It does?!
20:51 JoeJulian I thought the python bindings were still a work in progress.
20:51 lanning That's what I saw about a year ago.  I guess it might have changed.
20:51 JoeJulian I haven't been following the swift development all that closely.
20:51 LessSee__ joined #gluster
20:52 elico JoeJulian: what do you think? webdav ?
20:52 JoeJulian I would use swift.
20:52 lanning neither have I.  Since I was told to dismantle my $10M infrastructure, as the company was going a different way.  So, now I work elsewhere...
20:55 dbruhn joined #gluster
20:56 lanning elico: I guess it really depends on what you need it for.  swift has a multitenant object store REST API.  WebDAV is more filesystem-like. And can even be mounted on client OS's.
21:01 elico lanning: I don't really need but more of thinking about the available options.
21:01 elico GlusterFS by itself works fine on nfs and glusterfs mount.
21:01 elico actually it's faster for me over the NFS since it's 2 nodes in a replicate setup.
21:05 kmai007 what does it mean by quota context not set in inode?
21:05 kmai007 from the nfs.log
21:05 kmai007 then it gives a long gfid: string
21:07 elico JoeJulian: is the issue with ext4 is still present?
21:08 kmai007 has anybody had success experiences with a volume that is mounted up as NFS and FUSE on a system on different clients ?
21:08 LessSeen joined #gluster
21:08 kmai007 no locking issues....etc
21:08 ctria joined #gluster
21:10 elico kmai007: I have not tried it yet but there are probably issues with it..
21:10 JoeJulian no, ext4 is fixed
21:11 kmai007 sorry elico i was adding to my comment.
21:11 kmai007 i've not used ext4
21:11 kmai007 only xfs
21:12 cvdyoung Is there any problem with using IPs inside the gluster volume, and then mounting that same volume via a RR DNS entry?
21:12 JoeJulian no, but it's a bad idea.
21:12 JoeJulian @hostnames
21:12 glusterbot JoeJulian: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
21:13 JoeJulian If you use IPs you'll lock in your network configuration, tying that to services. You should strive to separate service configuration from infrastructure configuration.
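The factoid's "probe it by name" advice in concrete terms (hostnames hypothetical):

    # from any other peer, probing the IP-known server by name updates the pool record
    gluster peer probe server1.example.com

    # when building a new pool: probe every other server by name from the first,
    # then probe the first by name from one of the others
    gluster peer status        # Hostname: should now show names rather than IPs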
21:17 cvdyoung I ran the peer probe, but it still shows the hostname as the IP when I run gluster peer status.  Is there anything else I need to do?
21:18 kmai007 log into another peer
21:18 JoeJulian Note the last phrase of that factoid. I suspect that was missed.
21:18 kmai007 and probe that host
21:18 kmai007 unless you are saying all of them show IP
21:18 kmai007 when u gluster peer status
21:19 cvdyoung Yep, both show IP as hostname after I ran peer probe
21:20 cvdyoung nevermind, I figured out what I was doing....
21:20 cvdyoung I have a 1G nic, and IB.  The hostname I was using wasn't the same IP as what's in the peer
21:22 dbruhn semiosis, do you have a link to your java/libgfapi
21:26 glusterbot New news from newglusterbugs: [Bug 1041109] structure needs cleaning <https://bugzilla.redhat.com/show_bug.cgi?id=1041109> || [Bug 1092158] The daemon options suggested in /etc/sysconfig/glusterd are not being read by the init script <https://bugzilla.redhat.com/show_bug.cgi?id=1092158>
21:27 semiosis ~java | dbruhn
21:27 glusterbot dbruhn: https://github.com/semiosis/libgfapi-jni & https://github.com/semiosis/glusterfs-java-filesystem
21:27 dbruhn thanks
21:28 semiosis though lately i've just been pointing people to the latter, https://github.com/semiosis/glusterfs-java-filesystem, as the main entry point
21:28 semiosis @forget java
21:28 glusterbot semiosis: The operation succeeded.
21:28 semiosis @learn java as https://github.com/semiosis/glusterfs-java-filesystem
21:28 glusterbot semiosis: The operation succeeded.
21:29 semiosis most people shouldn't have much interest in the low level JNI binding unless they're developing glusterfs-java-filesystem (that'll be the day!)
21:31 JoeJulian I want to make a java fuse filesystem that connects to libgfapi. ;)
21:31 semiosis i want coffee.
21:31 semiosis afk
21:32 kmai007 ok here is a strange question
21:32 JoeJulian ok, can I give a strange answer?
21:32 kmai007 i had 40 glusterNFS clients mounting up the same gluster volume,  I ran a 'df ' right after I mounted it
21:33 kmai007 i wouldn't assume gluster to crap out on me because i wanted to stat the filesystem?
21:33 kmai007 strange yes, common no
21:34 JoeJulian define "crap out"
21:34 JoeJulian Because literally, no. I would never expect that.
21:34 lanning crap in, crap out?
21:34 lanning :)
21:35 MacWinner joined #gluster
21:35 kmai007 in the client logs, another volume also from the same storage bricks, said, network ping timeout, i'm disconnecting from you
21:36 kmai007 then prod folks were like uhhh....what did u do
21:36 kmai007 then i unmounted the NFS system and the gluster FUSE clients started to cascade this "disconnecting" stuff
21:37 semiosis kmai007: can you do it again?
21:37 kmai007 nope, that is what kills me
21:37 kmai007 well i bet u i could in PROD
21:37 kmai007 but obviously i'll have no job if i did that
21:37 kmai007 so i cannot recreate it in my R&D arena
21:38 kmai007 this is what i've seen on the client
21:38 lanning I wonder if you have a bad brick
21:40 JoeJulian Did you nfs mount from localhost on any of the servers?
21:41 kmai007 no i do not nfs:localhost
21:41 kmai007 on storage servers
21:42 kmai007 so i thought for redundancy, i'd spread the glusterNFS mounts out
21:42 kmai007 instead of pointing to 1 storage node
21:42 kmai007 i had 8
21:43 JoeJulian I've heard of a lot more than that working successfully.
21:43 kmai007 with 40 clients, i just spread it out 5 NFS mounts per storage node, to any client
21:43 JoeJulian It sounds like some sort of deadlock but without being able to recreate it, there's no way to diagnose it.
21:43 kmai007 i know, i think its a matter of understanding my environment/what i did
21:43 JoeJulian logs, state dumps, that kind of stuff would help if you can trigger it again.
21:43 kmai007 but its so hard to recall, i didn't make any feature changes
21:44 kmai007 yeh, exactly.
21:45 kmai007 joe at first glance what would you inference from this?
21:45 kmai007 from my nfs.log in production
21:45 kmai007 http://fpaste.org/97625/72149113/
21:45 glusterbot Title: #97625 Fedora Project Pastebin (at fpaste.org)
21:46 kmai007 this is better http://fpaste.org/97627/39872155/
21:46 glusterbot Title: #97627 Fedora Project Pastebin (at fpaste.org)
21:47 kmai007 b/c i like todo more than 1 thing at a time, as I have the NFS mounts to the clients, i'm also
21:47 JoeJulian I would infer that you ran out of free memory somewhere.
21:47 kmai007 from another client rsyncin files to that volume
21:47 JoeJulian Odd, though, that they're all just warnings and no errors.
21:48 JoeJulian Ran out of memory on ALL servers though?
21:48 kmai007 i know right, that is what's spooky
21:48 kmai007 0-7 says cannot allocate memory
21:49 JoeJulian That suggests to me that it was trying to allocate something that was producing an error. Perhaps a negative number if something isn't being checked for an error return.
21:49 daMaestro joined #gluster
21:50 JoeJulian Is that first item the first "Cannot allocate memory" warning in the logs?
21:51 kmai007 just thought that too, trying to reach back in time, in 1 sec. it spits out furious logs
21:51 JoeJulian As a complete aside, however, you really want to use --inplace when rsyncing to a gluster volume if you can. The renames will play hell with dht.
21:52 kmai007 is that something i've lacked
21:52 kmai007 omg, i've not heard of --inplace
21:54 kmai007 --inplace update destination files in-place
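In other words, something along these lines (flags other than --inplace are just illustrative):

    # --inplace avoids rsync's write-to-temp-then-rename pattern, whose renames upset
    # DHT's filename-hash-based placement on the volume
    rsync -av --inplace /source/dir/ /mnt/glustervol/dir/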
22:10 daMaestro joined #gluster
22:16 DV joined #gluster
22:18 kmai007 goodnight yall
22:19 JoeJulian later kmai007
22:24 [o__o] joined #gluster
22:25 MrAbaddon joined #gluster
22:29 lanning https://bugzilla.redhat.com/show_bug.cgi?id=1092178
22:29 glusterbot Bug 1092178: medium, unspecified, ---, rwheeler, NEW , RPM pre-uninstall scriptlet should not touch running services
22:30 dbruhn joined #gluster
22:34 chirino joined #gluster
22:34 DV joined #gluster
22:44 MrAbaddon joined #gluster
22:55 LessSeen_ joined #gluster
22:57 glusterbot New news from newglusterbugs: [Bug 1092178] RPM pre-uninstall scriptlet should not touch running services <https://bugzilla.redhat.com/show_bug.cgi?id=1092178>
22:57 theron_ joined #gluster
23:00 LessSeen joined #gluster
23:04 Durzo has anyone got any experience using the BD xlater in replica mode (with LVM)?
23:05 LessSeen joined #gluster
23:07 MrAbaddon joined #gluster
23:08 LessSeen_ joined #gluster
23:13 LessSeen joined #gluster
23:15 ninkotech joined #gluster
23:22 Matthaeus joined #gluster
23:30 MrAbaddon joined #gluster
23:34 gmcwhistler joined #gluster
23:37 glusterbot New news from resolvedglusterbugs: [Bug 965266] Create non-recursive not supported (HBASE) <https://bugzilla.redhat.com/show_bug.cgi?id=965266>
23:38 Matthaeus joined #gluster
23:45 gmcwhistler joined #gluster
23:52 mlessard joined #gluster
23:53 JoseBravo joined #gluster
