
IRC log for #gluster, 2013-09-12


All times shown according to UTC.

Time Nick Message
00:29 bennyturns joined #gluster
00:35 awheeler joined #gluster
00:38 andreask joined #gluster
00:40 jag3773 joined #gluster
00:44 rfortier joined #gluster
00:59 mjrosenb ok, I just transitioned from linux to freebsd by making a zfs filesystem on linux, then putting in a new root drive, and importing the linux-created zpool in freebsd.
00:59 mjrosenb unfortunately, I forgot to copy the brick configuration over
01:00 mjrosenb is this stored on the config-file-server anywhere?
01:03 bala joined #gluster
01:16 kevein joined #gluster
01:17 mjrosenb JoeJulian: you usually know what is going on.  Should I boot this system into linux to recover the config files, or should this not be an issue, and I should just create a new configuration?
02:27 davinder joined #gluster
02:28 ajha joined #gluster
02:42 vshankar joined #gluster
02:43 bstr_ joined #gluster
02:43 harish joined #gluster
02:48 saurabh joined #gluster
02:58 awheeler joined #gluster
03:00 rfortier joined #gluster
03:02 awheeler joined #gluster
03:13 bharata-rao joined #gluster
03:17 timothy joined #gluster
03:20 kshlm joined #gluster
03:22 shubhendu joined #gluster
03:34 awheeler joined #gluster
03:36 mjrosenb wow, tonight is slow, even for #gluster
03:45 sgowda joined #gluster
03:49 itisravi joined #gluster
03:54 ppai joined #gluster
04:06 fkautz mjrosenb: it really is, every channel i'm in is devoid of conversation save this one
04:07 mjrosenb fkautz: glad I can brighten your evening/morning.
04:07 kanagaraj joined #gluster
04:07 fkautz the pleasure is mine :)
04:08 mjrosenb fkautz: you happen to have any idea about my predicament earlier?
04:09 fkautz mmm, i won't be of much help on that problem :( sorry
04:10 fkautz if the data is still available, could you mount and recover it without rebooting?
04:14 hagarth mjrosenb: all configuration is persisted on /var/lib/glusterd
04:15 mjrosenb fkautz: nope, freebsd doesn't know how to speak to that filesystem.
04:15 mjrosenb hagarth: for all bricks?
04:15 hagarth mjrosenb: yeah .. on all servers.
04:18 mjrosenb hagarth: cool, so I should be able to copy that directory to the new OS from another brick, then start up glusterd like normal, and be back in business?
04:19 hagarth mjrosenb: ideally yes
04:20 hagarth hang on
04:20 hagarth do you have a backup of /var/lib/glusterd on this node?
04:21 mjrosenb hagarth: there are two nodes, memorybeta and memoryalpha.  memoryalpha is 100% intact, memorybeta is a fresh freebsd install.  accessing any data from the old memorybeta install is annoying.
04:21 awheeler joined #gluster
04:23 hagarth /var/lib/glusterd contains unique data too (like glusterd's UUID, peers etc.)
04:23 hagarth if you copy that from memoryalpha, then it will not be correct.
04:24 mjrosenb ahh.  bummer.
04:24 hagarth mjrosenb: you can possibly copy over /var/lib/glusterd and carefully perform modifications
04:25 davinder2 joined #gluster
04:25 mjrosenb hagarth: what modifications would I need to make?
04:26 mjrosenb otherwise, I can just boot into a livecd, put the 12 disks back in, and copy /var/lib/glusterd somewhere else :-/
04:26 hagarth mjrosenb: you would need to modify glusterd.info and peers directory
04:26 hagarth do you have only 2 nodes in the cluster?
04:26 ababu joined #gluster
04:27 mjrosenb hagarth: yes.
04:28 hagarth /var/lib/glusterd/peers/<uuid> in memoryalpha would be the UUID of glusterd in memorybeta
04:28 meghanam joined #gluster
04:28 hagarth s/glusterd/previous glusterd/
04:28 meghanam_ joined #gluster
04:29 hagarth mjrosenb: you can edit /var/lib/glusterd/glusterd.info on memorybeta to contain that UUID
04:30 hagarth mjrosenb: again on memorybeta, you would need to create a file in /var/lib/glusterd/peers with the file name being the UUID on memoryalpha
04:31 kPb_in_ joined #gluster
04:31 hagarth and then edit that file to contain hostname and uuid of memoryalpha
04:32 hagarth mjrosenb: I got to run now. Will be back in a bit.
04:33 mohankumar joined #gluster
04:34 mjrosenb hagarth: what is 'state' in the peers file?
04:36 hagarth mjrosenb: you can leave it at 3
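A rough sketch of the edits hagarth is describing, with placeholder UUIDs (the real values come from the surviving node). The glusterd.info UUID= key and the peer-file keys shown (uuid, state, hostname1) are assumptions based on this conversation; check an existing file under /var/lib/glusterd/peers/ on memoryalpha for the exact format:

    # on memoryalpha: note the UUID it recorded for memorybeta, and its own UUID
    ls /var/lib/glusterd/peers/                    # e.g. 11111111-1111-1111-1111-111111111111 (memorybeta)
    grep UUID /var/lib/glusterd/glusterd.info      # e.g. 22222222-2222-2222-2222-222222222222 (memoryalpha)

    # on memorybeta: after copying /var/lib/glusterd over, fix the node-specific pieces
    sed -i 's/^UUID=.*/UUID=11111111-1111-1111-1111-111111111111/' /var/lib/glusterd/glusterd.info
    rm /var/lib/glusterd/peers/*                   # the copied entry describes memorybeta itself
    cat > /var/lib/glusterd/peers/22222222-2222-2222-2222-222222222222 <<EOF
    uuid=22222222-2222-2222-2222-222222222222
    state=3
    hostname1=memoryalpha
    EOF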
04:44 itisravi joined #gluster
04:54 an joined #gluster
04:54 bala joined #gluster
04:54 raghu joined #gluster
05:01 rjoseph joined #gluster
05:01 davinder joined #gluster
05:03 CheRi joined #gluster
05:14 psharma joined #gluster
05:15 sgowda joined #gluster
05:16 timothy joined #gluster
05:16 bulde joined #gluster
05:16 shylesh joined #gluster
05:22 vpshastry joined #gluster
05:22 lalatenduM joined #gluster
05:26 shruti joined #gluster
05:33 shubhendu joined #gluster
05:40 mohankumar joined #gluster
05:42 jporterfield joined #gluster
05:42 micu2 joined #gluster
05:43 RameshN joined #gluster
05:45 satheesh1 joined #gluster
05:53 jporterfield joined #gluster
05:55 hagarth joined #gluster
05:57 an joined #gluster
06:00 mjrosenb so, the new brick says it is up
06:00 mjrosenb but clients are acting strange.
06:01 satheesh1 joined #gluster
06:01 nshaikh joined #gluster
06:02 anands joined #gluster
06:03 bharata-rao joined #gluster
06:08 meghanam_ joined #gluster
06:10 meghanam joined #gluster
06:11 ctria joined #gluster
06:11 an joined #gluster
06:12 meghanam_ joined #gluster
06:19 jporterfield joined #gluster
06:24 jtux joined #gluster
06:24 wgao joined #gluster
06:26 StarBeast joined #gluster
06:26 kshlm joined #gluster
06:31 psharma joined #gluster
06:31 Sonicos joined #gluster
06:34 meghanam joined #gluster
06:34 meghanam_ joined #gluster
06:37 harish joined #gluster
06:42 dusmant joined #gluster
06:44 mjrosenb ok, so clients are wonky until they get remounted
06:45 mjrosenb I feel like this should not be necessary.
06:45 mjrosenb hagarth: back yet?
06:46 kPb_in joined #gluster
06:51 ngoswami joined #gluster
06:56 aravindavk joined #gluster
06:59 eseyman joined #gluster
07:01 CheRi joined #gluster
07:07 puebele joined #gluster
07:07 davinder joined #gluster
07:09 ctria joined #gluster
07:10 satheesh1 joined #gluster
07:15 aib_007 joined #gluster
07:20 hybrid5121 joined #gluster
07:22 harish joined #gluster
07:25 verdurin joined #gluster
07:27 puebele1 joined #gluster
07:35 mohankumar joined #gluster
07:36 StarBeast joined #gluster
07:43 CheRi joined #gluster
07:52 _risibusy joined #gluster
07:57 andreask joined #gluster
08:03 rjoseph joined #gluster
08:21 chirino joined #gluster
08:22 harish joined #gluster
08:23 jporterfield joined #gluster
08:23 satheesh1 joined #gluster
08:26 puebele1 joined #gluster
08:31 JuanBre joined #gluster
08:32 bharata-rao joined #gluster
08:34 manik joined #gluster
08:35 raghu joined #gluster
08:35 mbukatov joined #gluster
08:39 shubhendu joined #gluster
08:45 ricky-ticky joined #gluster
08:48 jporterfield joined #gluster
08:50 StarBeast joined #gluster
08:56 jre1234 joined #gluster
08:56 puebele1 joined #gluster
08:56 shylesh joined #gluster
09:00 tryggvil joined #gluster
09:03 eseyman joined #gluster
09:06 mooperd_ joined #gluster
09:07 andreask joined #gluster
09:11 mgebbe_ joined #gluster
09:15 puebele1 joined #gluster
09:16 kanagaraj_ joined #gluster
09:17 manik joined #gluster
09:21 rjoseph joined #gluster
09:21 shubhendu joined #gluster
09:24 sgowda joined #gluster
09:24 jporterfield joined #gluster
09:36 glusterbot New news from newglusterbugs: [Bug 1002907] changelog binary parser not working <http://goo.gl/UB57mL>
09:40 rjoseph joined #gluster
09:46 saurabh joined #gluster
09:55 jporterfield joined #gluster
10:04 ricky-ticky joined #gluster
10:06 glusterbot New news from newglusterbugs: [Bug 1002940] change in changelog-encoding <http://goo.gl/dmQAcW> || [Bug 1007346] gluster 3.4 write <http://goo.gl/nSZnZ0>
10:14 an joined #gluster
10:16 dusmant joined #gluster
10:25 shruti joined #gluster
10:26 kPb_in joined #gluster
10:26 jporterfield joined #gluster
10:30 neofob joined #gluster
10:30 shubhendu joined #gluster
10:41 kkeithley1 joined #gluster
10:44 ricky-ticky joined #gluster
10:49 sprachgenerator joined #gluster
10:56 jtux joined #gluster
11:01 jporterfield joined #gluster
11:09 kanagaraj joined #gluster
11:11 andreask joined #gluster
11:11 sprachgenerator joined #gluster
11:12 ababu joined #gluster
11:15 saurabh joined #gluster
11:15 an joined #gluster
11:20 ppai joined #gluster
11:20 CheRi joined #gluster
11:22 kanagaraj joined #gluster
11:26 shubhendu joined #gluster
11:26 dusmant joined #gluster
11:29 shruti joined #gluster
11:32 edward2 joined #gluster
11:33 bulde joined #gluster
11:34 jclift joined #gluster
11:36 \_pol joined #gluster
11:37 sgowda joined #gluster
11:39 kanagaraj joined #gluster
11:45 tziOm joined #gluster
11:50 hagarth joined #gluster
11:54 hybrid5121 joined #gluster
11:56 TomKa joined #gluster
11:56 kanagaraj joined #gluster
11:57 lalatenduM joined #gluster
11:59 ndarshan joined #gluster
12:01 diegows_ joined #gluster
12:08 hybrid5121 joined #gluster
12:10 kanagaraj_ joined #gluster
12:14 kanagaraj joined #gluster
12:14 kanagaraj joined #gluster
12:16 CheRi joined #gluster
12:16 failshell joined #gluster
12:26 premera joined #gluster
12:28 vpshastry left #gluster
12:30 RedShift joined #gluster
12:36 andreask joined #gluster
12:51 bennyturns joined #gluster
12:53 awheeler joined #gluster
12:54 awheeler joined #gluster
12:55 rjoseph joined #gluster
13:04 lalatenduM joined #gluster
13:06 jporterfield joined #gluster
13:16 shubhendu joined #gluster
13:17 bulde joined #gluster
13:19 kanagaraj joined #gluster
13:22 kanagaraj joined #gluster
13:30 lpabon joined #gluster
13:41 bala joined #gluster
13:43 kanagaraj joined #gluster
13:43 rcheleguini joined #gluster
13:46 kPb_in joined #gluster
13:49 shubhendu joined #gluster
13:50 lpabon joined #gluster
13:54 hagarth joined #gluster
14:00 rfortier joined #gluster
14:01 mohankumar joined #gluster
14:07 ctria joined #gluster
14:09 jag3773 joined #gluster
14:13 Sonicos joined #gluster
14:25 Technicool joined #gluster
14:33 awheeler_ joined #gluster
14:34 haritsu joined #gluster
14:34 bdeb4 joined #gluster
14:35 ndk joined #gluster
14:35 stickyboy Got a hard drive down in a hardware RAID in a replica.
14:36 stickyboy Gonna go replace it... crossing my fingers that the clients don't notice!
14:39 samsamm joined #gluster
14:41 mjrosenb stickyboy: my impression is they always notice :-(
14:57 a2_ joined #gluster
14:57 avati joined #gluster
15:03 eseyman joined #gluster
15:07 ndk` joined #gluster
15:08 jcsp joined #gluster
15:13 atrius joined #gluster
15:25 l4v joined #gluster
15:35 zerick joined #gluster
15:36 gkleiman joined #gluster
15:38 genghi1 joined #gluster
15:39 genghi1 hi…  in gluster version 3.2.5, is it possible to change a brick type from replicated to distributed on-the-fly?
15:41 genghi1 or… if I currently have a two brick replicated system, if I add two more does that automatically make it replicated-distributed and also increase my available disk space?
15:41 sprachgenerator joined #gluster
15:42 genghi1 er… I meant volume type, of course
15:45 sprachgenerator_ joined #gluster
15:48 stickyboy mjrosenb: So far so good... my clients are all up and writing.  The replica is still rebuilding, dunno if I should bring it up yet or not.
15:48 rfortier joined #gluster
15:50 \_pol joined #gluster
15:52 manik joined #gluster
15:58 shylesh joined #gluster
16:07 glusterbot New news from newglusterbugs: [Bug 1007509] Add Brick Does Not Clear xttr's <http://goo.gl/Qx4F4w>
16:08 genghi1 or let me rephrase my question:  if I have a 4 brick replicated-distributed gluster config, will I get the advantages of replication and also increased disk space?
16:16 ndevos genghi1: you can do what you want with 3.3 and up, I do not know if/how that works with 3.2
16:18 Cenbe joined #gluster
16:22 genghi1 ndevos: you mean with replicate-distribute specifically?
16:25 bala joined #gluster
16:25 ctria joined #gluster
16:26 \_pol joined #gluster
16:27 vimal joined #gluster
16:27 timothy joined #gluster
16:28 zaitcev joined #gluster
16:29 \_pol_ joined #gluster
16:31 ndevos genghi1: you can move from a 2-brick replica to a 4-brick distribute-replica by commandline, something like: gluster volume add-brick $VOLUME replica 2 $NEW_BRICK_1 $NEW_BRICK_2
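A worked example of that command; the volume name, server names, and brick paths below are placeholders:

    # existing volume: replica 2 across server1:/export/brick and server2:/export/brick
    gluster volume add-brick $VOLUME replica 2 server3:/export/brick server4:/export/brick
    gluster volume rebalance $VOLUME start   # optional: spread existing files onto the new pair
    gluster volume info $VOLUME              # Type should now read Distributed-Replicate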
16:31 * stickyboy runs to the data center to check his RAID rebuild.
16:32 Mo__ joined #gluster
16:34 vpshastry joined #gluster
16:41 ababu joined #gluster
16:42 stickyboy Ok, hardware RAID rebuild is coming along SLOOOOOW.
16:42 stickyboy So I decided to bring my replica backup.
16:43 stickyboy s/backup/back up/
16:43 glusterbot What stickyboy meant to say was: So I decided to bring my replica back up.
16:43 anands joined #gluster
16:48 vpshastry1 joined #gluster
16:51 hagarth joined #gluster
16:52 LoudNoises joined #gluster
16:52 stickyboy Now trying:   gluster volume heal <volume> info
16:52 stickyboy And starting a heal:   gluster volume heal <volume>
16:53 stickyboy I see a few files going back and forth... eventually it should calm down I guess
16:58 stickyboy Yay I think it's done.
17:00 stickyboy Both replicas now say:    "Number of entries: 0"
17:00 stickyboy Yay
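Roughly the sequence stickyboy used, with a placeholder volume name:

    gluster volume status $VOLUME        # confirm the self-heal daemon is online on both servers
    gluster volume heal $VOLUME          # heal the files gluster already knows are out of sync
    gluster volume heal $VOLUME info     # repeat until both bricks report "Number of entries: 0"
    gluster volume heal $VOLUME full     # if entries linger, this walks the whole brick instead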
17:06 davinder joined #gluster
17:07 timothy joined #gluster
17:12 XpineX joined #gluster
17:18 mooperd_ joined #gluster
17:25 genghi1 ndevos: many thanks!
17:27 timothy joined #gluster
17:38 awheeler_ left #gluster
17:41 vpshastry1 left #gluster
17:52 vpshastry joined #gluster
17:52 timothy joined #gluster
18:05 vpshastry left #gluster
18:08 dusmant joined #gluster
18:09 hagarth joined #gluster
18:25 bcdonadio joined #gluster
18:25 haritsu joined #gluster
18:26 bcdonadio Since Ubuntu doesn't recognize the "_netdev" flag on fstab, what's the most reliable way to mount a GlusterFS volume at boot time?
18:26 mjrosenb bcdonadio: for me, it just gives a warning, then mounts it.
18:27 abassett left #gluster
18:27 bcdonadio mjrosenb, it says that the option is unknown, and, when booting, tries to mount before the network is set up (or fuse is loaded, i'm not sure)
18:28 bcdonadio but, when specifying the option manually, it does indeed throw a warning but still mounts the volume
18:29 bcdonadio the real problem is that when it fails to mount the volume at boot time, it stops the boot process and stays there, waiting for an input
18:29 mjrosenb bcdonadio: oh, don't know about that.
18:30 tjstansell we ended up writing our own glusterfs-mount init script that would mount glusterfs filesystems exactly when we want and use the noauto option to keep the normal init scripts from trying to mount it.
18:32 bcdonadio tjstansell, where can I find this script?
18:33 tjstansell ours is for centos ... but i could put it on fpaste ...
18:33 bcdonadio tjstansell, oh, I thought it was part of the project
18:34 tjstansell nah. just for our own use...
18:35 tjstansell we actually didn't care for the inconsistency in most of the gluster init scripts starting in 3.3.1, and have written our own to do exactly what we want...
18:37 bcdonadio tjstansell, what did you do in this script? just put in the header to start the networking before executing it, or is there some fancy extra magic?
18:38 tjstansell in our world, we have it run at S21, which is after most of the network stuff has come up, but just before the S25netfs init script which is where it would mount the _netdev devices.
18:41 bcdonadio tjstansell, and did you just put the mount commands directly in the script, or did you make it parse another file (like fstab)?
18:42 tjstansell we modeled it after the netfs script, so it parses /etc/fstab looking for glusterfs mounts that don't start with #
18:43 bcdonadio won't the default mount script complain that it doesn't understand the _netdev flag, even if it's noauto?
18:43 tjstansell and similarly, when it runs stop, we parse /proc/mounts and then run the __umount_loop function to unmount it, killing processes along the way, etc
18:44 tjstansell we don't use _netdev since we have noauto in there.
18:44 tjstansell but the mount does complain about "unknown option noauto" so we just grep -v that out of the output so it doesn't show up ...
18:45 bcdonadio cool, I would love to see your script if you don't mind...
18:46 tjstansell sure... let me put it on fpaste...
18:46 haritsu_ joined #gluster
18:47 tjstansell http://fpaste.org/39171/
18:47 glusterbot Title: #39171 Fedora Project Pastebin (at fpaste.org)
18:47 tjstansell not all that much to it... but it's worked for us.
18:48 tziOm joined #gluster
18:49 bcdonadio tjstansell, thank you very much! ^^
18:50 tjstansell sure, no problem.
18:50 bcdonadio i think i just need to add the lsb headers to work on ubuntu
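The fpaste link itself isn't reproduced in this log. A minimal sketch of the approach tjstansell describes (a SysV-style script run around S21 that mounts glusterfs entries from /etc/fstab and filters the "unknown option noauto" warning); the details here are assumptions, not the actual script:

    #!/bin/sh
    # glusterfs-mount: mount fstab glusterfs entries once the network is up
    # chkconfig: 345 21 89
    case "$1" in
      start)
        # non-comment fstab lines whose filesystem type is glusterfs
        awk '$1 !~ /^#/ && $3 == "glusterfs" {print $2}' /etc/fstab |
        while read mnt; do
          mount "$mnt" 2>&1 | grep -v 'unknown option noauto'
        done
        ;;
      stop)
        awk '$3 == "fuse.glusterfs" {print $2}' /proc/mounts |
        while read mnt; do
          umount "$mnt"
        done
        ;;
    esac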
18:52 daMaestro joined #gluster
18:57 rfortier joined #gluster
19:04 \_pol joined #gluster
19:09 bcdonadio S1r1M2f0!911
19:09 bcdonadio ops, wrong window
19:15 anands joined #gluster
19:16 \_pol_ joined #gluster
19:25 nueces joined #gluster
19:27 B21956 joined #gluster
19:49 Staples84 joined #gluster
19:53 an joined #gluster
20:02 bennyturns joined #gluster
20:10 bcdonadio I just cloned a VM containing Gluster, how should I trigger the generation of the node UUID?
20:13 JoeJulian node?
20:16 semiosis bcdonadio: stop glusterd, delete everything in /var/lib/glusterd (particularly the glusterd.info file containing the current uuid) then restart glusterd, a new uuid will be generated
20:18 bcdonadio semiosis, only /var/lib/glusterd? I saw the UUID being written to /etc/glusterd/glusterd.info too...
20:19 bcdonadio hmmmm, in fact, I have both directories
20:19 bcdonadio with different uuids
20:20 JoeJulian if you have an /etc/glusterd then you have an old version.
20:20 bcdonadio JoeJulian, probably, I installed the version on the ubuntu repo first
20:21 bcdonadio removed both anyways
20:21 semiosis both?  maybe you upgraded and old files were left around in /etc/glusterd, because that got moved to /var/lib/glusterd in 3.3.0.  you can use lsof to see what is in use of course.
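A sketch of the reset semiosis describes, assuming the cloned node holds no volume definitions you want to keep (the service is glusterfs-server in the Ubuntu packages, glusterd on most other distros):

    service glusterfs-server stop
    rm -rf /var/lib/glusterd/*     # drops glusterd.info (the old UUID), peers/, vols/ ...
    rm -rf /etc/glusterd           # only if a stale pre-3.3 config directory is still lying around
    service glusterfs-server start # glusterd generates a fresh UUID on startup
    cat /var/lib/glusterd/glusterd.info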
20:25 \_pol joined #gluster
20:31 nueces joined #gluster
20:37 rfortier joined #gluster
20:41 andreask joined #gluster
20:56 bcdonadio Where should I report a bug with the Ubuntu package on PPA?
20:56 diegows_ joined #gluster
20:57 anands joined #gluster
20:58 \_pol joined #gluster
21:00 kkeithley_ semiosis: ^^^
21:01 semiosis bcdonadio: just start talking, i'm listening
21:03 bcdonadio semiosis, in /etc/init/mounting-glusterfs.conf, in WAIT_FOR= there should be "glusterfs-server", and not "static-network-up"
21:03 bcdonadio well, there should also be a way to check if the network is up, but the service "static-network-up" doesn't exist and throws an error in the upstart log
21:05 bcdonadio http://fpaste.org/39219/13790199/
21:05 glusterbot Title: #39219 Fedora Project Pastebin (at fpaste.org)
21:15 semiosis bcdonadio: what ubuntu release are you using?
21:15 jcsp joined #gluster
21:16 bcdonadio semiosis, precise
21:16 semiosis static-network-up is not a service, it is an event
21:16 semiosis http://manpages.ubuntu.com/manpages/precise/man7/upstart-events.7.html
21:16 glusterbot <http://goo.gl/nP1LXk> (at manpages.ubuntu.com)
21:17 bcdonadio so, this event doesn't exist in precise?
21:17 semiosis that is the man page for precise...
21:17 semiosis the event is listed there, it exists
21:17 semiosis what is the problem you're having?
21:18 semiosis [14:29] <bcdonadio> the real problem is that when it fails to mount the volume at boot time, it stops the boot process and stays there, waiting for an input
21:18 semiosis bcdonadio: add 'nobootwait' option to the fstab line
21:18 semiosis see http://manpages.ubuntu.com/manpages/precise/en/man5/fstab.5.html
21:18 glusterbot <http://goo.gl/N1eLJy> (at manpages.ubuntu.com)
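For example, an fstab line along these lines (server name, volume, and mount point are placeholders); nobootwait tells Ubuntu's mountall not to block the boot if this filesystem fails to mount:

    server1:/myvol  /mnt/myvol  glusterfs  defaults,nobootwait  0  0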
21:20 semiosis kkeithley_: thanks for pinging me about this :)
21:20 bcdonadio semiosis, cool! now it at least doesn't stop the boot anymore
21:20 semiosis rock & roll
21:20 bcdonadio but doesn't mount the volume either
21:21 semiosis bcdonadio: ok, we can deal with that.  first clear out the client log (delete or truncate) in /var/log/glusterfs/client-mount-point.log
21:21 semiosis then reboot so we get a fresh log, and pastie.org that
21:21 semiosis also could you paste your fstab line?
21:21 bcdonadio yes, wait a minute
21:25 bcdonadio hmmm, now it is mounting, but I didn't see it because it only mounts a minute or so after the machine is up
21:25 semiosis \o/
21:25 bcdonadio (the log you referred to doesn't exist, though)
21:25 semiosis thats odd
21:26 semiosis client log should be somewhere... you could check the client process for a log file argument with ps
21:28 * kkeithley_ still wants .../extras/UbuntuDEBs/* in the glusterfs sources
21:30 bcdonadio semiosis, http://fpaste.org/39229/37902138/
21:30 glusterbot Title: #39229 Fedora Project Pastebin (at fpaste.org)
21:30 bcdonadio the log being generated is for nfs, but I thought I was using the native client...
21:37 bcdonadio semiosis, apparently the nobootwait solved all problems, except for the error being thrown at /var/log/upstart/wait-for-state-mounting-glusterfsstatic-network-up.log
21:37 semiosis idk what that file is
21:37 bcdonadio semiosis, http://fpaste.org/39234/21863137/
21:37 glusterbot Title: #39234 Fedora Project Pastebin (at fpaste.org)
21:38 tjstansell bcdonadio: what's your fstab entry look like for this mount?
21:40 bcdonadio 127.0.0.1:/vg0  /home/vg0  glusterfs  defaults,nobootwait  0  0
21:42 tjstansell and what does 'mount' say as to the type of mount it is?
21:42 tjstansell i'm curious why it would log to nfs.log ...
21:42 tjstansell you should have a /var/log/glusterfs/home-gv0.log (or something like that) if it's using a fuse mount
21:42 bcdonadio 127.0.0.1:/vg0 on /home/vg0 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
21:42 bcdonadio the output of mount
21:43 bcdonadio tjstansell, I also have this file
21:45 johan___1 joined #gluster
21:45 bcdonadio tjstansell, those are the gluster processes: http://fpaste.org/39235/22322137/
21:45 glusterbot Title: #39235 Fedora Project Pastebin (at fpaste.org)
21:45 johan___1 hi.. quick question
21:46 bcdonadio there are two, and one is explicitly being told to write its log to nfs.log
21:46 johan___1 what is the most common network interconnect and what is a typical size of a glusterfs-cluster
21:46 tjstansell sure. that's just the nfs part of gluster firing up to handle nfs requests.  that's normal unless you disable nfs on your volumes.
21:47 johan___1 what is the largest feasible number of nodes
21:47 tjstansell the other log is the client log ... /var/log/glusterfs/<mount-point>.log
21:48 bcdonadio and do you have any idea about the upstart error?
21:49 tjstansell no idea about upstart.
21:49 theron joined #gluster
21:53 JoeJulian johan___1: I would probably guess that the most common is 1gig ethernet. I don't know if there is a typical size. They're usually pretty need-specific.
21:55 mooperd_ joined #gluster
21:55 johan___1 ok.. so using IB is not the standard approach.. :) IB limits the possible number of servers drastically
21:55 JoeJulian johan___1: The largest feasible number of nodes... 4294967296 (if you don't connect it to the internet) with ipv4.
21:56 johan___1 I detect sarcasm
21:56 JoeJulian In a non-sarcastic way, why does IB limit the possible number of servers?
21:57 johan___1 I was more curious about what is actually common in terms of interconnect
21:57 johan___1 most use cases I've read about are between 10-20 servers
21:58 johan___1 just curious whether that is considered a small/medium/large cluster etc
21:58 JoeJulian Doesn't IB use ipv4 for establishing rdma, then handle everything through that? Seems like there should be the same ip-based limitation in that case.
21:58 johan___1 and what is in fact considered a large production cluster, both in terms of overall storage capacity and number of nodes
21:59 JoeJulian I like to plan for 100 servers and 100,000 clients.
21:59 JoeJulian I have 3 servers and ~20 clients. :D
22:02 johan___1 that's what it seems like
22:02 JoeJulian Why do you ask?
22:03 JoeJulian Oh, and to be fair, the sarcasm you detected was at the word, "node". It's the equivalent of "smurf" to me with its ambiguity.
22:05 JoeJulian "Common", btw, is going to be the price to performance ratio that best suits the needs of the project. Most commonly that's 1gig ethernet since it's pretty cheap.
22:06 JoeJulian There are statistically rarer use cases that require infiniband or 10gig, and on the lower end, I've heard of someone still using 100mbit (no idea why?!?!)
22:15 jporterfield joined #gluster
22:43 looped joined #gluster
22:44 looped hi. is it possible to do a live upgrade from gluster 3.2 (which is available in ubuntu 12.04 repositories) to 3.4 (from the ppa)?
22:45 geewiz joined #gluster
22:46 geewiz Hi, I just rebuilt a crashed Gluster node and added it back as explained in http://gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server. But it doesn't start to self-heal.
22:46 glusterbot <http://goo.gl/uiC77u> (at gluster.org)
22:48 geewiz The glusterd.vol.log says "reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:991)"
22:48 geewiz Any hints at where I should start looking?
22:53 JoeJulian ~3.4 upgrade notes | looped
22:53 glusterbot looped: http://goo.gl/SXX7P
23:05 tjstansell careful about the change in port number used for the bricks though.  i had a problem where the clients wouldn't pick up the new port numbers, but that might be because my clients were the servers too.
23:10 bennyturns joined #gluster
23:11 looped JoeJulian: i followed http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/ - but after i've run the process, i can't mount the volume. gluster volume status gives me: Another transaction could be in progress. Please try again after sometime.
23:11 glusterbot <http://goo.gl/qOiO7> (at vbellur.wordpress.com)
23:15 zerick joined #gluster
23:16 JoeJulian looped: So you're upgrading to 3.3?
23:17 JoeJulian geewiz: try "gluster volume heal $vol", check with "gluster volume heal $vol status". Make sure the self-heal daemon is running with "gluster volume status $vol"
23:20 looped JoeJulian: no, I'm upgrading from 3.2->3.4, but doing the extra steps stated in the 3.2->3.3 guide (i.e. copying over the files from /etc/glusterd to /var/lib/glusterd and running the glusterd --xlator-option *.upgrade=on -N command)
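A sketch of those steps on one server; the service and package names assume the Ubuntu/PPA packaging and are an assumption, not a quote from the upgrade guide:

    service glusterfs-server stop               # the 3.2 ubuntu package may name the service glusterd instead
    apt-get update && apt-get install glusterfs-server glusterfs-client   # 3.4 packages from the ppa
    cp -a /etc/glusterd/* /var/lib/glusterd/    # only needed coming from 3.2, which kept state under /etc/glusterd
    glusterd --xlator-option '*.upgrade=on' -N  # regenerates the volfiles for the new version, then exits
    service glusterfs-server start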
23:27 jporterfield joined #gluster
23:30 geewiz JoeJulian: Thanks! Unfortunately, it's a 3.2 cluster... :-(
23:44 badone joined #gluster
23:46 jporterfield joined #gluster
23:50 StarBeast joined #gluster
