
IRC log for #gluster, 2016-04-14


All times shown according to UTC.

Time Nick Message
00:10 mjrosenb does anyone have any idea why that symbol would be missing? or what configure option i should turn off (or on) to make it not want that symbol?
00:10 mjrosenb /root/glusterfs-3.7.3-git/rpc/rpc-lib/src/rpc-clnt.c:1160: undefined reference to `xdr_auth_glusterfs_parms_v2'
00:27 mjrosenb oh, it looks like that file can't access common.h
00:27 mjrosenb err, can't find.
00:32 bowhunter joined #gluster
00:46 mjrosenb curiouser and curiouser... it looks like configure isn't detecting the OS as bsd?
00:46 mjrosenb manually specifying it was not a good idea.
00:46 mjrosenb well, at least it did not work.
00:50 mjrosenb GF_BSD_HOST_OS_FALSE='#'
00:50 mjrosenb GF_BSD_HOST_OS_TRUE=''
00:50 mjrosenb maybe that means it was detected correctly?
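
In automake-generated output, a conditional that evaluates true leaves the _TRUE variable empty and sets the _FALSE variable to '#', so the values above do indicate that the BSD host was detected. A quick way to confirm, assuming the build tree still has its config.log:

    grep GF_BSD_HOST_OS config.log
    # GF_BSD_HOST_OS_TRUE=''   -> condition held (BSD detected)
    # GF_BSD_HOST_OS_FALSE='#' -> the non-BSD branch is commented out in the Makefiles
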
00:59 haomaiwa_ joined #gluster
01:00 chirino joined #gluster
01:22 bit4man joined #gluster
01:28 EinstCrazy joined #gluster
01:30 Lee1092 joined #gluster
01:32 EinstCrazy joined #gluster
01:33 NuxRo joined #gluster
01:42 jvandewege joined #gluster
01:46 russoisraeli joined #gluster
01:46 russoisraeli hello folks. I have a simple 2 node replicate volume. The volume mounts fine on Linux. I, however, need to mount it from FreeBSD. There, mount_glusterfs simply hangs, similarly to how it can hang when specifying a wrong node name to mount. Any ideas what might be required on FreeBSD?
01:51 mjrosenb I thought mounting on freebsd wasn't supported.
01:51 mjrosenb [2016-04-14 09:50:23.257751] W [MSGID: 101095] [xlator.c:194:xlator_dynload] 0-xlator: Cannot open "/usr/local/gluster-3.7.3/lib/glusterfs/3.7.3/xlator/mgmt/glusterd.so"
01:51 mjrosenb GAH.
01:52 russoisraeli I am not sure, but there is a 3.7.6 package available
01:53 mjrosenb hrmm
01:53 russoisraeli if mount is not supported, how else can the package be used? via the library only?
01:53 mjrosenb and the server
01:53 russoisraeli or as a node
01:53 russoisraeli right
01:53 mjrosenb I'm using it on freebsd just for the bricks
01:53 mjrosenb they never need to mount anything
01:54 mjrosenb I'm less than thrilled with gluster's support for multiple volumes :-/
01:54 bennyturns joined #gluster
01:55 russoisraeli other than that a question --- I have a 3 node replicate running on Gentoo Linux (all three nodes)
01:55 glusterbot russoisraeli: -'s karma is now -356
01:55 russoisraeli on that I have KVM images running
01:56 russoisraeli the guest KVM's are all FreeBSD
01:56 russoisraeli the I/O access from the KVM's is abysmal
01:56 russoisraeli very very slow
01:56 russoisraeli any obvious reason?
01:57 luizcpg joined #gluster
01:57 mjrosenb I am so confused... why is it looking for glusterd.so, it never even built that file
01:57 russoisraeli libvirt/KVM is using the gluster api
01:58 russoisraeli did you build it manually?
01:58 mjrosenb yes.
01:58 russoisraeli why not from packages or ports? any specific reason?
01:59 mjrosenb yes, I patched the sources to better support using multiple volumes (it is still pretty bad, but at least I can use it)
02:01 EinstCrazy joined #gluster
02:01 mjrosenb not only did it not build glusterd.so, it never built /any/ .so
02:07 julim joined #gluster
02:09 ira joined #gluster
02:16 blu_ joined #gluster
02:18 haomaiwang joined #gluster
02:24 mjrosenb closer... maybe:
02:24 mjrosenb ../../../py-compile: Missing argument to --destdir.
02:34 R4yTr4cer joined #gluster
02:42 mjrosenb oops, I accidentally removed my vol file from a brick
02:42 mjrosenb I should be able to just copy it from another brick, right?
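
For reference, a hedged sketch of the usual ways to restore a volume's configuration from a peer (hostname and volume name below are placeholders):

    # ask glusterd to pull the volume definitions from a peer
    gluster volume sync otherserver.example.com all
    # or copy the volume's config directory from a peer and restart glusterd
    rsync -a otherserver:/var/lib/glusterd/vols/VOLNAME/ /var/lib/glusterd/vols/VOLNAME/
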
02:53 kdhananjay joined #gluster
03:02 haomaiwang joined #gluster
03:03 nishanth joined #gluster
03:07 overclk joined #gluster
03:08 chirino_m joined #gluster
03:15 Gaurav_ joined #gluster
03:27 kotreshhr joined #gluster
03:33 atinm joined #gluster
03:40 shubhendu joined #gluster
03:47 kdhananjay joined #gluster
03:51 aravindavk joined #gluster
03:59 itisravi joined #gluster
04:01 haomaiwa_ joined #gluster
04:02 jiffin joined #gluster
04:20 sakshi joined #gluster
04:23 poornimag joined #gluster
04:24 Gnomethrower joined #gluster
04:26 ovaistariq joined #gluster
04:27 suliba joined #gluster
04:27 aspandey joined #gluster
04:29 cvstealt1 joined #gluster
04:31 Dasiel joined #gluster
04:31 saltsa joined #gluster
04:31 mjrosenb joined #gluster
04:31 dthrvr joined #gluster
04:32 unforgiven512 joined #gluster
04:32 tessier joined #gluster
04:32 tessier joined #gluster
04:32 kkeithley joined #gluster
04:34 masterzen joined #gluster
04:34 rwheeler joined #gluster
04:35 pocketprotector joined #gluster
04:36 devilspgd joined #gluster
04:36 [o__o] joined #gluster
04:40 primusinterpares joined #gluster
04:54 DV joined #gluster
04:55 bfoster joined #gluster
04:56 rafi joined #gluster
04:57 ndk joined #gluster
04:57 portante joined #gluster
05:01 haomaiwa_ joined #gluster
05:04 karthik___ joined #gluster
05:04 nbalacha joined #gluster
05:12 beeradb joined #gluster
05:15 ppai joined #gluster
05:16 kdhananjay joined #gluster
05:16 gowtham joined #gluster
05:19 jiffin joined #gluster
05:19 ndarshan joined #gluster
05:30 hgowtham joined #gluster
05:30 Apeksha joined #gluster
05:30 Wizek joined #gluster
05:30 Manikandan joined #gluster
05:30 Wizek_ joined #gluster
05:31 hchiramm joined #gluster
05:33 luizcpg joined #gluster
05:34 Bhaskarakiran joined #gluster
05:37 vmallika joined #gluster
05:46 ashiq joined #gluster
05:50 oxae joined #gluster
05:51 tswartz joined #gluster
05:52 prasanth joined #gluster
05:52 atalur joined #gluster
06:00 worzieznc joined #gluster
06:01 Wizek joined #gluster
06:01 haomaiwa_ joined #gluster
06:01 nishanth joined #gluster
06:01 lanning_ joined #gluster
06:02 marbu joined #gluster
06:02 robb_nl joined #gluster
06:02 itisravi_ joined #gluster
06:03 shubhendu_ joined #gluster
06:03 Bryce_ joined #gluster
06:03 decay_ joined #gluster
06:04 gowtham_ joined #gluster
06:05 edong23 joined #gluster
06:05 BogdanR_ joined #gluster
06:06 rideh joined #gluster
06:06 rastar joined #gluster
06:06 pdrakewe_ joined #gluster
06:07 sankarshan_away joined #gluster
06:07 siel_ joined #gluster
06:09 frakt joined #gluster
06:11 sadbox joined #gluster
06:12 tom[] joined #gluster
06:15 ashiq joined #gluster
06:16 mdavidson joined #gluster
06:16 scubacuda joined #gluster
06:16 anoopcs joined #gluster
06:18 uebera|| joined #gluster
06:19 mhulsman joined #gluster
06:20 ctria joined #gluster
06:22 jtux joined #gluster
06:27 Saravanakmr joined #gluster
06:28 spalai joined #gluster
06:31 mowntan joined #gluster
06:32 mowntan joined #gluster
06:34 hchiramm joined #gluster
06:35 R4yTr4cer joined #gluster
06:45 jri joined #gluster
06:46 portante joined #gluster
06:46 [Enrico] joined #gluster
06:49 SOLDIERz joined #gluster
06:50 Chr1st1an joined #gluster
06:54 jotun joined #gluster
07:01 haomaiwa_ joined #gluster
07:03 al joined #gluster
07:04 pur_ joined #gluster
07:04 Guest89761 joined #gluster
07:04 tyler274 joined #gluster
07:06 hackman joined #gluster
07:10 cholcombe joined #gluster
07:16 ggarg joined #gluster
07:16 jwd joined #gluster
07:17 [diablo] joined #gluster
07:18 ivan_rossi joined #gluster
07:25 sage joined #gluster
07:32 fsimonce joined #gluster
07:35 SOLDIERz joined #gluster
07:35 SOLDIERz joined #gluster
07:39 deniszh joined #gluster
07:41 wnlx joined #gluster
07:47 jermudgeon joined #gluster
07:53 Debloper joined #gluster
08:01 haomaiw__ joined #gluster
08:05 d4n13L joined #gluster
08:07 ahino joined #gluster
08:08 steveeJ joined #gluster
08:12 Slashman joined #gluster
08:14 semiosis joined #gluster
08:14 semiosis joined #gluster
08:16 aravindavk joined #gluster
08:19 karnan joined #gluster
08:26 nangthang joined #gluster
08:30 wnlx joined #gluster
08:49 tswartz joined #gluster
08:49 wnlx joined #gluster
08:55 tswartz joined #gluster
08:58 kovshenin joined #gluster
09:01 tessier joined #gluster
09:01 haomaiwa_ joined #gluster
09:05 R4yTr4cer joined #gluster
09:06 wnlx joined #gluster
09:06 unforgiven512 joined #gluster
09:07 unforgiven512 joined #gluster
09:10 ctria joined #gluster
09:12 Manikandan joined #gluster
09:16 kovshenin joined #gluster
09:17 jwaibel joined #gluster
09:19 goretoxo joined #gluster
09:30 cpetersen joined #gluster
09:32 btpier joined #gluster
09:33 Wizek_ joined #gluster
09:33 Wizek joined #gluster
09:34 tswartz joined #gluster
09:34 edong23 joined #gluster
09:35 d0nn1e joined #gluster
09:35 tessier joined #gluster
09:35 tessier joined #gluster
09:35 gnulnx joined #gluster
09:37 ashiq joined #gluster
09:40 fsimonce joined #gluster
09:41 R4yTr4cer joined #gluster
09:41 spalai joined #gluster
09:41 wnlx joined #gluster
09:45 ggarg joined #gluster
09:46 tyler274 joined #gluster
09:48 Apeksha joined #gluster
09:51 mdavidson joined #gluster
09:51 julim joined #gluster
10:01 ndevos puiterwijk, meet jiffin
10:01 ndevos jiffin, meet puiterwijk :)
10:01 haomaiwa_ joined #gluster
10:03 puiterwijk .fas jiffin
10:04 okajimmy joined #gluster
10:04 ndevos hah, no zodbot here :)
10:04 puiterwijk ndevos: hah. I always assume he's everywhere :)
10:05 puiterwijk jiffin: I would like to confirm that you have a RHBZ account with the same email as your email address in FAS.
10:05 Wizek joined #gluster
10:05 ira joined #gluster
10:07 R4yTr4cer joined #gluster
10:17 jiffin puiterwijk: Hi
10:18 jiffin puiterwijk: yes it is
10:19 R4yTr4cer joined #gluster
10:24 kkeithley1 joined #gluster
10:25 Debloper joined #gluster
10:28 ashiq joined #gluster
10:31 goretoxo joined #gluster
10:33 wnlx joined #gluster
10:38 rastar joined #gluster
10:40 okajimmy joined #gluster
10:53 dlambrig_ joined #gluster
10:54 bfoster joined #gluster
10:54 Gnomethrower joined #gluster
10:55 anil joined #gluster
10:58 arcolife joined #gluster
11:01 haomaiw__ joined #gluster
11:05 Wizek joined #gluster
11:07 shyam joined #gluster
11:11 Wizek__ joined #gluster
11:15 Wizek joined #gluster
11:15 Wizek_ joined #gluster
11:17 prasanth joined #gluster
11:24 Wizek__ joined #gluster
11:27 volga629_ joined #gluster
11:27 johnmilton joined #gluster
11:28 Wizek joined #gluster
11:31 bennyturns joined #gluster
11:32 nbalacha joined #gluster
11:33 ctria joined #gluster
11:35 puiterwijk jiffin: okay, great. could you give me your FAS username once more?
11:37 foster__ joined #gluster
11:41 BitByteNybble110 joined #gluster
11:49 jiffin puiterwijk: jiffintt
11:50 puiterwijk That... is a different one than the one ndevos gave me.
11:50 puiterwijk ndevos: ^
11:50 ndevos jiffin: hmm, and I thought it was thotz just like on GitHub?
11:53 Apeksha joined #gluster
11:59 Apeksha_ joined #gluster
12:00 Wizek_ joined #gluster
12:02 jiffin ndevos: i have multiple names :) , "thotz" was my favorite during my college days
12:03 ndevos jiffin: ah, its confusing :) and "thotz" seems to exist in the Fedora Account System too
12:03 jiffin ndevos: you can call me whichever is comfortable for you
12:03 ndevos J.
12:03 kkeithley_ just don't call you late for supper
12:03 jiffin ndevos: maybe i created it using gmail account
12:03 Wizek joined #gluster
12:04 post-fac1um joined #gluster
12:05 bfoster1 joined #gluster
12:07 portante joined #gluster
12:07 jiffin ndevos: it's me, and I created it using gmail (but I don't remember when)
12:08 ndevos jiffin: ah, ok, maybe you can find a way to remove one of the accounts? I guess puiterwijk could point you to some docs or such
12:09 chirino joined #gluster
12:10 jri joined #gluster
12:11 puiterwijk We cannot remove FAS accounts.
12:11 crashmag joined #gluster
12:11 cvstealth joined #gluster
12:13 pdrakeweb joined #gluster
12:13 jbrooks joined #gluster
12:13 mdavidson joined #gluster
12:14 jiffin puiterwijk: hmm
12:15 xavih joined #gluster
12:17 Ax joined #gluster
12:18 puiterwijk jiffin: but there should be no harm just leaving the account there, not?
12:19 ghenry joined #gluster
12:20 sac joined #gluster
12:20 poornimag joined #gluster
12:21 Guest28927 Hello, new to gluster, I would appreciate some help... A couple of weeks ago I installed a two node cluster (one replicated brick, +/- 500GB, and 5 gluster clients) on Debian 7. Now I realize that version 3.2.7 is old --> I would like to upgrade to at least 3.5, or better 3.7, but I can't find any documentation/procedure. Could anyone help?
12:21 ctria joined #gluster
12:26 jiffin puiterwijk: right
12:29 post-factum joined #gluster
12:33 cyberbootje joined #gluster
12:38 Slashman joined #gluster
12:39 mhulsman joined #gluster
12:39 wnlx joined #gluster
12:44 russoisraeli joined #gluster
12:46 social joined #gluster
12:47 hchiramm joined #gluster
12:49 unclemarc joined #gluster
12:56 Guest90107 joined #gluster
12:57 shyam joined #gluster
12:58 wnlx joined #gluster
12:58 jiffin joined #gluster
12:59 mpietersen joined #gluster
13:01 ic0n joined #gluster
13:03 cpetersen joined #gluster
13:05 dlambrig_ joined #gluster
13:13 kbyrne joined #gluster
13:14 Wizek__ joined #gluster
13:16 mjrosenb ok, looks like I needed make install -k
13:16 mjrosenb but I feel like that is /very/ wrong
13:17 BogdanR joined #gluster
13:17 Dasiel joined #gluster
13:18 Romeor joined #gluster
13:18 masterzen joined #gluster
13:18 lanning joined #gluster
13:19 papamoose joined #gluster
13:19 chirino joined #gluster
13:19 Nebraskka joined #gluster
13:20 bennyturns joined #gluster
13:21 rwheeler joined #gluster
13:21 karthik___ joined #gluster
13:21 bwerthmann joined #gluster
13:22 rafi1 joined #gluster
13:22 ira joined #gluster
13:23 7GHAARTIV joined #gluster
13:25 gnulnx joined #gluster
13:26 gbox joined #gluster
13:26 nbalacha joined #gluster
13:40 nigelb joined #gluster
13:42 kotreshhr joined #gluster
13:43 JoeJulian mjrosenb: what's wrong with gluster's support for multiple volumes?
13:44 mjrosenb JoeJulian: it falls over hard when it tries to make a hardlink between two volumes.
13:45 post-fac1um joined #gluster
13:45 bwerthma1n joined #gluster
13:46 JoeJulian Whah? Why would you ever want that?
13:46 pdrakewe_ joined #gluster
13:47 Ax joined #gluster
13:47 JoeJulian A volume is a complete posix filesystem. You cannot hardlink between filesystems. I don't see why gluster should be different.
13:48 jvandewege joined #gluster
13:49 edong23 joined #gluster
13:49 mbukatov joined #gluster
13:49 JoeJulian Guest2332: It's going to require down time. Stop your volumes, stop glusterfs-server. kill anything with "gluster" in the process name (pkill -f gluster). Upgrade and start glusterfs-server.
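
A rough sketch of that offline upgrade on Debian, assuming the gluster.org 3.7 apt repository is already configured; the volume name is a placeholder and package names may differ slightly per release:

    gluster volume stop VOLNAME          # repeat for every volume
    service glusterfs-server stop
    pkill -f gluster                     # kill any leftover gluster processes
    apt-get update && apt-get install glusterfs-server glusterfs-client
    service glusterfs-server start
    gluster volume start VOLNAME
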
13:50 Wizek joined #gluster
13:50 Wizek_ joined #gluster
13:50 gbox joined #gluster
13:55 farhorizon joined #gluster
13:57 Slashman joined #gluster
14:00 rafi joined #gluster
14:00 jwd joined #gluster
14:01 haomaiwa_ joined #gluster
14:01 dthrvr joined #gluster
14:01 unclemarc joined #gluster
14:02 post-factum joined #gluster
14:02 atalur joined #gluster
14:03 ivan_rossi joined #gluster
14:03 mdavidson joined #gluster
14:03 rastar joined #gluster
14:03 ira joined #gluster
14:03 gnulnx joined #gluster
14:04 karnan joined #gluster
14:06 nishanth joined #gluster
14:07 hchiramm joined #gluster
14:09 nbalacha joined #gluster
14:14 gluster joined #gluster
14:18 mjrosenb JoeJulian: I want it mostly because that is how my system is set up, and it /used/ to work (gluster-2 days), and changing it at this point would be annoying.
14:21 kotreshhr joined #gluster
14:21 bluenemo joined #gluster
14:22 JoeJulian The entire way hardlinks are handled was changed around 3.4 iirc because hardlinks were completely broken. It was super easy to break a hardlink. Self-heals and rebalance would even break them. That's when someone got the bright idea of creating files based on the pseudo inode number (gfid) and making files hardlinks of those inode identifiers. I'm not even sure that you *can* make that work across volumes unless you could somehow pre-populate the
14:22 JoeJulian gfid files.
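
To make the gfid mechanism concrete: on a brick, each file carries a trusted.gfid xattr, and the same inode is hard-linked under the brick's .glusterfs/ directory at a path derived from that gfid. A sketch with a hypothetical brick path and gfid value:

    getfattr -n trusted.gfid -e hex /bricks/b1/some/file
    #   trusted.gfid=0xd0f3...                        (hypothetical value)
    ls -i /bricks/b1/.glusterfs/d0/f3/d0f3...         # same inode number as the file itself
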
14:22 arcolife joined #gluster
14:25 mowntan joined #gluster
14:26 jri joined #gluster
14:27 bowhunter joined #gluster
14:28 DV__ joined #gluster
14:32 farhorizon joined #gluster
14:33 jwd joined #gluster
14:34 jwaibel joined #gluster
14:40 mjrosenb well, I replaced all of the hardlinks with symlinks.
14:44 dbruhn joined #gluster
14:46 * mjrosenb wonders, would it be possible to just export every filesystem as its own volume, and just have clients auto-assemble them?
14:52 ahino joined #gluster
14:53 Apeksha joined #gluster
14:56 Hesulan joined #gluster
14:59 shyam joined #gluster
15:00 jwd joined #gluster
15:01 haomaiw__ joined #gluster
15:04 jwaibel joined #gluster
15:06 dataio joined #gluster
15:06 bit4man joined #gluster
15:07 bennyturns joined #gluster
15:07 EinstCrazy joined #gluster
15:08 ackjewt joined #gluster
15:10 kpease joined #gluster
15:10 shaunm joined #gluster
15:13 blu_ joined #gluster
15:26 wnlx joined #gluster
15:28 atalur joined #gluster
15:43 skylar joined #gluster
15:53 Pupeno joined #gluster
16:01 dgandhi joined #gluster
16:01 bwerthmann joined #gluster
16:04 farhorizon joined #gluster
16:06 fubada JoeJulian: are you familiar with rsync-options --bwlimit? Seems to kill geo rep as soon as I enable that
16:06 fubada files get created on the receiving end, but 0 bytes
16:06 fubada if I disable bwlimit, replication resumes
16:06 fubada 3.7
16:19 nangthang joined #gluster
16:21 ppai joined #gluster
16:21 JoeJulian mjrosenb: Sure, it's called a distribute volume. ;)
16:25 Pupeno joined #gluster
16:28 mjrosenb c.c I thought I /was/ using a distribute volume
16:32 JoeJulian "export every filesystem as its own volume" = "one brick per filesystem", "have the clients auto-assemble them" = "distribute translator". It's all done in one single volume though.
16:33 fubada anyone have experience activating rsync-options="--bwlimit=XX"?
16:33 glusterbot fubada: rsync-options="'s karma is now -1
16:34 m0zes joined #gluster
16:36 jlp1 joined #gluster
16:38 Pupeno joined #gluster
16:39 hackman joined #gluster
16:41 spalai joined #gluster
16:42 spalai1 joined #gluster
16:45 jwd joined #gluster
16:47 kbyrne joined #gluster
16:47 rastar joined #gluster
16:50 jwd joined #gluster
17:01 Pupeno joined #gluster
17:02 shubhendu_ joined #gluster
17:02 ninkotech joined #gluster
17:02 ninkotech_ joined #gluster
17:05 Pupeno joined #gluster
17:07 dlambrig_ joined #gluster
17:09 aspandey joined #gluster
17:09 gem joined #gluster
17:11 Pupeno joined #gluster
17:14 ashiq joined #gluster
17:18 hagarth joined #gluster
17:21 papamoose1 joined #gluster
17:21 farhorizon joined #gluster
17:22 BogdanR joined #gluster
17:25 kotreshhr joined #gluster
17:25 bwerthmann joined #gluster
17:26 crashmag joined #gluster
17:26 Pupeno joined #gluster
17:27 kotreshhr left #gluster
17:27 ivan_rossi left #gluster
17:27 farhorizon joined #gluster
17:29 BitByteNybble110 joined #gluster
17:33 skylar joined #gluster
17:33 ashiq joined #gluster
17:33 rastar joined #gluster
17:33 unclemarc joined #gluster
17:34 gem joined #gluster
17:35 Slashman joined #gluster
17:35 shyam joined #gluster
17:36 wnlx joined #gluster
17:38 Pupeno joined #gluster
17:38 jlp1 joined #gluster
17:38 kbyrne joined #gluster
17:39 fubada FYI if anyone is having issues with geo-replication rsync_options
17:39 fubada no need to use double quotes around the actual options as printed in the docs
17:39 bfoster joined #gluster
17:42 JoeJulian fubada: can I get a link to said docs?
17:43 bwerthmann joined #gluster
17:43 Pupeno joined #gluster
17:46 oxae joined #gluster
17:47 Wizek__ joined #gluster
17:48 kpease_ joined #gluster
17:49 fubada JoeJulian: documents like this https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/sect-Starting_Geo-replication.html led me to believe I needed to use double quotes around rsync options
17:49 glusterbot Title: 14.4. Starting Geo-replication (at access.redhat.com)
17:49 fubada search for 'bwlimit' on that page
17:49 rafi joined #gluster
17:49 fubada i may have misunderstood the documentation
17:49 JoeJulian Ah, well pooh. I can't do anything about Red Hat documentation.
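
For the record, a sketch of the command shape under discussion (master/slave names and the limit value are placeholders). The double quotes in the Red Hat example are ordinary shell quoting; per fubada's report, the quote characters must not end up stored literally as part of the option value:

    gluster volume geo-replication mastervol slavehost::slavevol \
        config rsync-options --bwlimit=1024
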
17:50 portante_ joined #gluster
17:50 papamoose1 left #gluster
17:51 14WAARCEY joined #gluster
17:52 hchiramm joined #gluster
17:52 gem joined #gluster
17:52 edong23 joined #gluster
17:52 Dasiel joined #gluster
17:56 mhulsman joined #gluster
17:56 squizzi joined #gluster
17:57 squizzi joined #gluster
17:59 deniszh joined #gluster
17:59 tru_tru joined #gluster
18:00 cliluw joined #gluster
18:03 ahino joined #gluster
18:06 rafi joined #gluster
18:10 jwd joined #gluster
18:17 karnan joined #gluster
18:27 deniszh joined #gluster
18:37 luizcpg joined #gluster
18:37 luizcpg left #gluster
18:38 nage joined #gluster
18:39 kovshenin joined #gluster
18:41 Pupeno joined #gluster
18:42 Pupeno_ joined #gluster
18:42 ahino joined #gluster
18:52 karnan joined #gluster
18:56 robb_nl joined #gluster
18:58 hagarth joined #gluster
18:59 jiffin joined #gluster
18:59 bwerthmann joined #gluster
19:00 rafi joined #gluster
19:08 shubhendu_ joined #gluster
19:10 Pupeno joined #gluster
19:10 rafi joined #gluster
19:12 nishanth joined #gluster
19:15 Pupeno joined #gluster
19:16 dlambrig_ joined #gluster
19:31 dlambrig_ joined #gluster
19:32 rafi joined #gluster
19:38 shyam joined #gluster
19:50 chirino joined #gluster
19:50 jwd joined #gluster
19:52 haomaiwang joined #gluster
20:11 DV joined #gluster
20:12 nage joined #gluster
20:19 chirino_m joined #gluster
20:22 kknoppf5 joined #gluster
20:24 kknoppf5 anyone got time for a quick question about a stuck rebalance operation?
20:29 kknoppf5 safest way to stop it?
20:29 JoeJulian theoretically rebalance...stop
20:29 farhorizon joined #gluster
20:29 JoeJulian What version?
20:30 kknoppf5 gluster volume xxx rebalance stop already performed but the process has been hanging out for days
20:30 kknoppf5 3.7.6-1.el7
20:30 JoeJulian Ah
20:31 JoeJulian I honestly don't know if there's a one-liner answer. I haven't encountered it myself, so I haven't had an opportunity to come up with a workaround.
20:32 kknoppf5 hasn't left any log output for days either, but the process is consistently using 50% cpu or so... the process doesn't seem to have any files open besides the pidfile and log, nor any socket connections
20:32 kknoppf5 risks of killing the process?
20:33 JoeJulian Should be none.
20:33 JoeJulian The only issue might be that the volume will continue to show a rebalance in progress.
20:34 JoeJulian The only way I've seen this solved in the past was to stop all glusterd, edit the info file, and start glusterd again.
20:34 JoeJulian That can be done without interrupting existing volume use.
20:34 JoeJulian (new mounts will fail until glusterd is started again)
20:35 kknoppf5 interesting... thanks for the suggestion
20:35 farhorizon joined #gluster
20:38 dlambrig_ joined #gluster
20:44 suliba joined #gluster
20:44 Gaurav_ joined #gluster
20:45 farhoriz_ joined #gluster
20:48 hchiramm joined #gluster
20:49 natgeorg joined #gluster
20:51 kknoppf5 so that state file must be /var/lib/glusterd/vols/xxx/node_state.info?
20:54 kknoppf5 is there an authoritative source for its various settings?
20:54 JoeJulian The last time I looked at this (several versions back) it was /var/lib/glusterd/vols/xxx/info
20:57 jwd joined #gluster
20:58 kknoppf5 node_state.info has status, rebalance_status, rebalance_op, and rebalance_id
21:00 jwd joined #gluster
21:01 kknoppf5 status=3 on all nodes except for one of the two in the redundant pair on which the rebalance was copying FROM
21:02 kknoppf5 status=1 on that node
21:06 jwaibel joined #gluster
21:09 JoeJulian iirc, 1 is what you want. Anything with "rebalance" can be removed. (make backups of course)
21:23 Wizek_ joined #gluster
21:26 bit4man joined #gluster
21:26 Wizek__ joined #gluster
21:28 kknoppf5 seems like it must have changed in later versions; info has status=1 across all servers and volumes, whereas node_state.info has varying status=0/1/3/4 and also includes rebalance_status=0/1/4, rebalance_op=0/19
21:41 haomaiwa_ joined #gluster
21:42 JoeJulian 3.7.10 status=1 for my online volume. 2 for an offline one.
21:51 Pupeno joined #gluster
21:54 gnulnx joined #gluster
21:54 dlambrig__ joined #gluster
21:55 ahino joined #gluster
21:55 dgandhi joined #gluster
21:57 lanning joined #gluster
21:58 dgandhi joined #gluster
21:59 pdrakeweb joined #gluster
22:00 jvandewege_ joined #gluster
22:01 jwd joined #gluster
22:07 DV joined #gluster
22:11 johnmilton joined #gluster
22:18 bwerthmann joined #gluster
22:23 bfoster joined #gluster
22:25 natgeorg joined #gluster
22:37 nage joined #gluster
22:37 chirino joined #gluster
22:42 haomaiwang joined #gluster
22:56 deniszh1 joined #gluster
22:56 kknoppf5 looks like a normal status for node_state.info would be:
22:57 kknoppf5 rebalance_status=0
22:57 kknoppf5 status=0
22:57 kknoppf5 rebalance_op=0
22:57 kknoppf5 rebalance-id=00000000-0000-0000-0000-000000000000
22:57 johnmilton joined #gluster
23:00 DV joined #gluster
23:13 russoisraeli joined #gluster
23:15 caitnop joined #gluster
23:16 cliluw Is there any way to change the default options that a newly created volume starts off with?
23:16 cliluw I want all my volumes to start with nfs.disable set to on.
23:29 kknoppf5 joined #gluster
23:30 kknoppf5 also, etc-glusterfs-glusterd.vol.log has a lot of repeated "unable to acquire lock for xxx" and "unable to release lock for xxx"
23:31 kknoppf5 if i was to stop glusterd, replace node_state.info with the correct settings, and restart glusterd, i'm assuming i can do that one node at a time to minimize disruption?
23:45 JoeJulian kknoppf5: You could not. They'll have to match or glusterd will not start.
23:46 JoeJulian However, if you only stop glusterd and not glusterfsd, the bricks will keep running and there will be no disruption.
23:46 JoeJulian (except for any new clients trying to mount while glusterd is down)
23:47 kknoppf5 ah thanks, so that means any currently existing discrepancies in that file across nodes will actually be a problem in the future
23:49 Hesulan joined #gluster
23:50 JoeJulian yes
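
Putting the advice together, a cautious sketch of that cleanup, applied identically on every node (paths come from the discussion above; the service commands and exact fields vary by distribution and gluster version, so back everything up first):

    service glusterd stop                 # glusterfsd brick processes keep running, so I/O continues
    cp -a /var/lib/glusterd/vols/VOLNAME/node_state.info /var/lib/glusterd/vols/VOLNAME/node_state.info.bak
    # edit node_state.info so the rebalance_* entries are removed/reset and the file
    # ends up identical on all nodes
    service glusterd start
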
23:50 JoeJulian cliluw: nope. Would make a good feature request, to add a "default" group. file a bug
23:50 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
