
IRC log for #gluster, 2016-09-15


All times shown according to UTC.

Time Nick Message
00:09 Vaelatern joined #gluster
00:16 xMopxShell joined #gluster
00:19 ahino joined #gluster
00:57 johnmilton joined #gluster
01:01 itisravi joined #gluster
01:16 plarsen joined #gluster
01:16 kimmeh joined #gluster
01:25 derjohn_mob joined #gluster
01:29 nobody481 joined #gluster
01:31 nobody481 I've messed up one of my gluster configurations and I'm trying to start over from scratch.  The problem is I have one host where I cannot detach a peer no matter what I try.  The problem host doesn't show up on any of the other hosts, but the problem host has 1 peer listed.  I even tried uninstalling glusterfs-server and reinstalling, but it's still there.  Any ideas?
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:54 JoeJulian nobody481: if you're trying to start from scratch, wipe /var/lib/glusterd on all your servers.
01:55 JoeJulian nobody481: and delete the brick directories.
01:55 plarsen joined #gluster
01:56 nobody481 JoeJulian: I'll try that, thank you!
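[JoeJulian's "start from scratch" advice can be sketched as a small script to run on every server. This is an editorial sketch, not from the log: /var/lib/glusterd is the stock state directory, but the brick path is a made-up example -- substitute your own. It is destructive by design.]

```shell
# DANGER: this destroys all gluster configuration and brick data on a node.
# Note that bricks (glusterfsd) keep running after glusterd stops, so they
# must be killed separately before wiping state.

# wipe_gluster_state STATE_DIR BRICK_DIR: remove glusterd's state directory
# and a brick directory.
wipe_gluster_state() {
    glusterd_dir="$1"
    brick_dir="$2"
    rm -rf "$glusterd_dir" "$brick_dir"
}

# Typical invocation on each server (brick path is hypothetical):
# systemctl stop glusterd
# pkill glusterfsd
# wipe_gluster_state /var/lib/glusterd /data/brick1
```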
02:12 blu__ joined #gluster
02:43 RameshN joined #gluster
02:45 matiu joined #gluster
02:45 matiu Why is it not recommended to put glusterfs bricks on the root partition ?
02:56 nishanth joined #gluster
03:03 Gambit15 joined #gluster
03:11 kramdoss_ joined #gluster
03:11 nbalacha joined #gluster
03:12 cliluw matiu: It's a fail-safe. If the physical disk of the brick is unmounted, Gluster will fail loudly. If you use the root partition, Gluster will still work - it would just be writing to the empty folder of the mount point.
03:13 matiu so it's not recommended because, if your root partition fills up for example, gluster will not fail loudly ?
03:14 cliluw matiu: I suppose that would cause Gluster to fail loudly but it won't immediately fail.
03:14 cliluw matiu: If you mount a subdirectory of the partition, that subdirectory will immediately disappear if the disk is unmounted so you will detect the failure earlier.
03:16 matiu cliluw, so we shouldn't use bindmounted directories either then eh ?
03:17 cliluw matiu: Yeah, unless you're fine with potentially filling up your root partition.
03:17 matiu I get ya
03:17 matiu thanks cliluw
03:18 cliluw matiu: Note that that's the root partition of the Gluster server, not the root partition on the brick.
03:19 matiu ah ok, I don't get it now :)
03:19 matiu so what can go wrong if the gluster brick is on the root partition ?
03:19 matiu and why would putting on a secondary disk help
03:20 matiu sorry, I understand better with concrete examples
03:20 magrawal joined #gluster
03:22 cliluw matiu: Don't be sorry - I'm just bad at explaining.
03:24 cliluw matiu: I run my Gluster server off an SSD that's only 8 GiB. This is small but it's more than enough to store a whole Ubuntu Server distribution.
03:25 cliluw matiu: I plug in a large external USB hard drive that's 1024 GiB and mount it to the directory /mnt/gluster-brick. I want to use this external hard drive as a brick.
03:26 cliluw matiu: If I tell Gluster that the brick path is /mnt/gluster-brick/, it will work fine under normal operation. The problem is when I accidentally unplug the external hard drive.
03:26 cliluw matiu: If I unplug the hard drive, Gluster will still be writing to /mnt/gluster-brick/, except /mnt/gluster-brick is no longer a mount because the mounted device has been unplugged. Gluster would actually be writing to my small 8 GiB SSD.
03:27 baojg joined #gluster
03:28 cliluw matiu: I instead want to mount /mnt/gluster-brick/ and make a subdirectory in it like /mnt/gluster-brick/a. If the external hard drive is ever unplugged, that folder "a" will disappear and stop Gluster from writing to my SSD.
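[cliluw's subdirectory trick works because the brick path stops existing when the disk is unmounted. The same "is this actually a mount?" question can be answered by comparing device numbers, the trick mountpoint(1) uses. An editorial sketch; the guard command at the end is hypothetical:]

```shell
# is_mountpoint DIR: succeed if DIR is a mount point, i.e. it sits on a
# different device than its parent directory. (By this test "/" itself
# does not count as a mount point, since "/.." is "/".)
is_mountpoint() {
    [ "$(stat -c %d "$1")" != "$(stat -c %d "$1/..")" ]
}

# Hypothetical guard before using a brick like /mnt/gluster-brick/a:
# is_mountpoint /mnt/gluster-brick || { echo "disk not mounted, refusing"; exit 1; }
```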
03:30 matiu ok, I get that bit :)
03:31 matiu I'm wondering though, when I go 'gluster volume create' it says "Don't put your brick on the root partition"
03:34 cliluw matiu: If you really want to use the root partition, you can use "mode=script" or add a "force" to your command.
03:35 matiu yeah, I'm adding 'force'
03:35 matiu but I feel like I should know what I'm forcing :)
03:36 matiu like it's not recommended, and I'm sure there's a reason for that
03:36 matiu and I'm ignorantly going, who cares! create my brick on root partition
03:37 gem joined #gluster
03:51 mchangir joined #gluster
03:58 atinm joined #gluster
04:00 aspandey joined #gluster
04:02 Wizek_ joined #gluster
04:06 matiu if I have a brick directory, but I've run 'gluster volume delete gluster1'
04:06 matiu but the brick directory is still there
04:06 matiu can I build a gluster volume out of that ?
04:20 itisravi joined #gluster
04:21 itisravi joined #gluster
04:22 Lee1092 joined #gluster
04:26 Muthu joined #gluster
04:35 ppai joined #gluster
04:46 sanoj joined #gluster
04:46 nishanth joined #gluster
04:50 prasanth joined #gluster
04:50 rafi joined #gluster
04:54 prasanth joined #gluster
05:08 ankitraj joined #gluster
05:10 jiffin joined #gluster
05:10 prasanth joined #gluster
05:13 aspandey joined #gluster
05:14 skoduri joined #gluster
05:14 Gnomethrower joined #gluster
05:16 karthik_ joined #gluster
05:18 kimmeh joined #gluster
05:23 ndarshan joined #gluster
05:31 RameshN joined #gluster
05:39 Bhaskarakiran joined #gluster
05:49 hgowtham joined #gluster
05:53 muneerse joined #gluster
05:56 mhulsman joined #gluster
05:57 RameshN joined #gluster
05:58 mchangir joined #gluster
05:58 aravindavk joined #gluster
05:59 k4n0 joined #gluster
06:00 ramky joined #gluster
06:01 kshlm joined #gluster
06:01 ashiq joined #gluster
06:03 poornima joined #gluster
06:05 mhulsman1 joined #gluster
06:09 Saravanakmr joined #gluster
06:14 morse joined #gluster
06:15 prth joined #gluster
06:16 karnan joined #gluster
06:18 jtux joined #gluster
06:21 ankitraj joined #gluster
06:22 itisravi joined #gluster
06:23 kdhananjay joined #gluster
06:33 itisravi joined #gluster
06:35 Muthu joined #gluster
06:37 aspandey joined #gluster
06:46 karthik_ joined #gluster
06:46 ankit-raj joined #gluster
06:47 aspandey_ joined #gluster
06:48 itisravi_ joined #gluster
06:49 ashiq joined #gluster
06:50 ankitraj joined #gluster
06:51 Saravanakmr joined #gluster
06:53 mchangir joined #gluster
06:57 mhulsman joined #gluster
06:57 jkroon joined #gluster
07:01 ashiq_ joined #gluster
07:01 Saravanakmr_ joined #gluster
07:01 harish joined #gluster
07:01 ankit-raj joined #gluster
07:07 Saravanakmr_ joined #gluster
07:07 ankit-raj joined #gluster
07:08 itisravi_ joined #gluster
07:13 aspandey joined #gluster
07:16 jri joined #gluster
07:17 ankitraj joined #gluster
07:20 ashiq_ joined #gluster
07:21 mhulsman joined #gluster
07:22 baudster joined #gluster
07:26 aspandey_ joined #gluster
07:26 prth joined #gluster
07:27 itisravi joined #gluster
07:27 mhulsman1 joined #gluster
07:29 satya4ever joined #gluster
07:30 mhulsman joined #gluster
07:33 fsimonce joined #gluster
07:34 robb_nl_ joined #gluster
07:41 ashiq_ joined #gluster
07:41 karthik_ joined #gluster
07:42 arcolife joined #gluster
07:42 Saravanakmr_ joined #gluster
07:50 Gnomethrower joined #gluster
07:54 kimmeh joined #gluster
08:00 roost joined #gluster
08:03 armyriad joined #gluster
08:06 jwd joined #gluster
08:08 robb_nl_ joined #gluster
08:08 d0nn1e joined #gluster
08:09 jkroon joined #gluster
08:12 baudster ola everyone!
08:12 baudster Hi everyone
08:15 mchangir joined #gluster
08:21 devyani7 joined #gluster
08:32 aravindavk joined #gluster
08:37 devyani7 joined #gluster
08:37 ahino joined #gluster
08:48 karnan joined #gluster
08:53 Slashman joined #gluster
08:54 ieth0 joined #gluster
09:00 raghug joined #gluster
09:03 ahino joined #gluster
09:10 derjohn_mob joined #gluster
09:17 ndarshan joined #gluster
09:18 madmatuk joined #gluster
09:18 madmatuk joined #gluster
09:20 [diablo] joined #gluster
09:21 RameshN joined #gluster
09:29 jwd joined #gluster
09:31 aravindavk joined #gluster
09:35 gem joined #gluster
09:36 Manikandan joined #gluster
09:38 robb_nl_ joined #gluster
09:39 ndarshan joined #gluster
09:40 Manikandan joined #gluster
09:40 karnan joined #gluster
09:41 yalu joined #gluster
09:46 ieth0 joined #gluster
09:47 RameshN joined #gluster
09:52 rastar joined #gluster
09:53 atinm joined #gluster
09:54 RameshN joined #gluster
09:58 ahino joined #gluster
10:11 kimmeh joined #gluster
10:19 [diablo] joined #gluster
10:23 RameshN joined #gluster
10:27 ashah joined #gluster
10:30 Philambdo joined #gluster
10:32 shyam joined #gluster
10:33 rafi joined #gluster
10:40 Wizek_ joined #gluster
10:45 ndarshan joined #gluster
10:47 atinm joined #gluster
10:51 nishanth joined #gluster
10:52 karnan joined #gluster
10:56 mhulsman joined #gluster
10:56 ira joined #gluster
10:58 RameshN joined #gluster
10:59 mhulsman1 joined #gluster
11:00 mchangir joined #gluster
11:00 plarsen joined #gluster
11:06 aravindavk joined #gluster
11:18 ahino joined #gluster
11:18 baojg joined #gluster
11:20 philiph joined #gluster
11:25 gem joined #gluster
11:27 tom[] joined #gluster
11:30 shruti joined #gluster
11:30 karthik_ joined #gluster
11:30 poornima joined #gluster
11:30 rjoseph|afk joined #gluster
11:30 jiffin joined #gluster
11:30 rafi joined #gluster
11:30 itisravi joined #gluster
11:30 aravindavk joined #gluster
11:30 kdhananjay joined #gluster
11:30 ashah joined #gluster
11:31 philiph left #gluster
11:31 lalatenduM joined #gluster
11:31 skoduri joined #gluster
11:31 pkalever joined #gluster
11:31 kshlm joined #gluster
11:31 ppai joined #gluster
11:31 Saravanakmr_ joined #gluster
11:31 hgowtham joined #gluster
11:32 arcolife joined #gluster
11:32 atinm joined #gluster
11:32 sac joined #gluster
11:32 ashiq_ joined #gluster
11:32 raghug joined #gluster
11:32 aspandey_ joined #gluster
11:32 RameshN joined #gluster
11:33 ieth0 joined #gluster
11:34 jiffin1 joined #gluster
11:34 nishanth joined #gluster
11:35 karnan joined #gluster
11:35 ndarshan joined #gluster
11:36 Manikandan_ joined #gluster
11:40 rastar joined #gluster
11:41 B21956 joined #gluster
11:44 rafi1 joined #gluster
11:48 B21956 joined #gluster
11:53 ira joined #gluster
11:56 Philambdo joined #gluster
11:56 mchangir joined #gluster
12:01 RameshN joined #gluster
12:01 hagarth joined #gluster
12:05 jiffin1 joined #gluster
12:06 Philambdo joined #gluster
12:08 kdhananjay joined #gluster
12:14 plarsen joined #gluster
12:17 plarsen joined #gluster
12:25 aravindavk joined #gluster
12:27 karthikus joined #gluster
12:32 kotreshhr joined #gluster
12:35 armin ohai. i'm looking into performance tuning of glusterfs. i right now have a replica 3 cluster across 3 VMs which all run on the same hypervisor host (qemu/kvm). actual question: i'm wondering if i hit the bottleneck of the underlying hypervisor layer anyways so striping wouldn't give me much better rates. right now i get about 1845/814/0 iops (in fio) with about ~8mbyte/sec.
12:36 armin i have that volume mounted on my notebook (the hypervisor) which runs those 3 VMs inside qemu/kvm and consider that value pretty low for a system with SSD.
12:36 mchangir joined #gluster
12:36 armin i already tested with NFSv3/tcp as well.
12:36 armin any suggestions very much welcome!
12:38 cloph armin: your terminology is unclear.
12:39 armin cloph: can you elaborate? i'm happy to clarify.
12:39 harish joined #gluster
12:39 cloph so you got three peers, each of which in a VM that run on the same host, and you use those three VMs to setup a gluster cluster consisting of three peers, and you created a replica 3 volume on that cluster. That's how I parse the description...
12:40 armin that's correct.
12:40 cloph in any case: Striping is not what you want...
12:41 armin well i try to find a method to identify the actual bottleneck.
12:41 cloph it's considered obsolete...
12:41 armin ok.
12:41 [diablo] joined #gluster
12:41 cloph @stripes
12:41 glusterbot cloph: I do not know about 'stripes', but I do know about these similar topics: 'stripe'
12:41 cloph @stripe
12:41 glusterbot cloph: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
12:41 cloph the replacement for that is sharding
12:41 cloph @sharding
12:41 glusterbot cloph: for more details about sharding, see http://blog.gluster.org/2015/12/introducing-shard-translator
12:42 shyam joined #gluster
12:43 ankitraj joined #gluster
12:44 ankitraj joined #gluster
12:46 armin cloph: thank you. i tried with and without sharding and that makes almost no difference. throughput is slightly higher with sharding set to off.
12:47 cloph sharding helps for healing; in the "all is perfect" case it is a slight performance decrease.
12:47 armin cloph: it's likely that i do things wrong or that i did not properly understand obvious things because i'm a noob.
12:47 armin ok.
12:48 cloph well, first thing to check is whether you have filesystem with large enough inode size, so gluster doesn't need to do two writes for the extended attributes...
12:48 devyani7 joined #gluster
12:48 cloph and then if you're using VMs, the caching mode/disk emulation has an impact.
12:48 RameshN joined #gluster
12:48 cloph So to measure gluster performance impact, you should do simultaneous "native VM" performance checks as well.
12:50 unclemarc joined #gluster
12:56 baudster joined #gluster
12:57 mhulsman joined #gluster
12:57 Mattias joined #gluster
12:58 kdhananjay joined #gluster
12:58 mhulsman1 joined #gluster
13:01 mchangir joined #gluster
13:03 hagarth joined #gluster
13:04 nbalacha joined #gluster
13:13 shyam joined #gluster
13:18 baudster joined #gluster
13:21 baudster I'm having a weird issue
13:21 baudster for some reason, i can't mount the gluster volume through fstab
13:21 baudster but i can mount it manually through the command line
13:21 Muthu joined #gluster
13:21 Klas show us fstab and the mount command
13:21 Klas then it's possible to see what might differ
13:22 baudster sure, hold on let me connect to the server
13:22 jvandewege joined #gluster
13:25 cloph and if what you really mean by "can not mount via fstab" is "it won't auto-mount at boot", then it's a completely different problem :-)
13:25 mreamy joined #gluster
13:26 baudster fstab -->   dc1-wp-gluster-clust-01:/glustervol0   /glustervol0    glisters    defaults,_netdev   0   0
13:26 baudster command -->  /sbin/mount.glusterfs dc1-wp-gluster-clust-01:glustervol0 /glustervol0
13:26 cloph glisters ?
13:27 baudster sorry, it's glusters
13:27 baudster sorry, it's glusterfs
13:27 mhulsman joined #gluster
13:28 baudster darn autocomplete
13:28 cloph so what is the output when you do a mount -v /glustervol0?
13:30 baudster Illegal option -v
13:31 ashiq_ joined #gluster
13:31 baudster if it's -V = mount from util-linux 2.27.1 (libmount 2.27.0: selinux, assert, debug)
13:32 Klas my guess, it doesn't resolve the implied domain
13:32 Klas also, please paste instead of writing, might be a typo =P
13:33 baudster dc1-wp-gluster-clust-01:/glustervol0    /glustervol0    glusterfs       defaults,_netdev        0       0
13:33 baudster /sbin/mount.glusterfs dc1-wp-gluster-clust-01:glustervol0 /glustervol0
13:34 baudster domain and corresponding ip are all hardcoded in /etc/hosts and resolves properly
13:36 mchangir joined #gluster
13:37 ieth0 joined #gluster
13:38 baudster any ideas?
13:41 cloph if your mount doesn't understand -v = verbose, look up what it expects as the verbose flag and try again with that...
13:42 cloph or try without the -v flag and see what it complains about...
13:43 cloph it is rare that "can't mount" comes without any error message whatsoever, so let us know why mount thinks it cannot do the job
13:43 baudster if i just do "mount /glustervol" on the command line, it goes through without any issue
13:43 Saravanakmr joined #gluster
13:43 baudster mounts the volume straight away
13:44 cloph ok, so where does your "for some reason, i can't mount the gluster volume through fstab" come from then?
13:44 mhulsman joined #gluster
13:44 baudster during boot that is
13:46 cloph 15:25 <cloph> and if what you really mean by "can not mount via fstab" is "it won't auto-mount at boot", then it's a completely different problem :-)
13:46 cloph → so a) if dc1-wp-gluster-clust-01 is the same host as where you try to mount it on: Try to use a different host from the cluster
13:47 cloph if b) it is a different host, then chances are that _netdev is ignored/that you need to depend on a different target instead.
13:48 cloph so far I myself am also looking for a proper way to have a mount via the local node work.
13:48 cloph at least in debian jessie there's no suitable target to depend on.
13:48 cloph As the gluster service returns before it is fully set up..
13:49 baudster hmm
13:49 baudster yeah, mounting a different node on the cluster works
13:51 cloph so your system attempts to mount before the local gluster service is done with startup.. As I'm in the same boat, I don't have a nice solution. But if you can point to another host, then no problem anyway. (I could use it for a geo-replication-only host, i.e. a gluster consisting of only one peer)
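[On systemd-based distros, one alternative to pointing at another host is to let systemd automount the volume on first access instead of at boot, so the local glusterd has time to come up. An editorial sketch using baudster's paths, untested here; option support depends on your systemd version:]

```
# fstab: defer the mount until first access (systemd systems only)
dc1-wp-gluster-clust-01:/glustervol0  /glustervol0  glusterfs  defaults,_netdev,noauto,x-systemd.automount  0  0
```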
13:53 skylar joined #gluster
13:53 mchangir joined #gluster
13:54 baudster how about mounting it through rc.local?
13:55 cloph that is one of the "hacks" to work around it
13:55 cloph (not something I'd consider a "proper way" -- it works, but is kinda cheating)
13:58 mhulsman1 joined #gluster
13:59 jns joined #gluster
14:01 shyam joined #gluster
14:01 Klas personally, we mount via a startup script
14:02 Klas since fallback option seemed broken
14:02 Klas (yes, we will file a bug report at some point)
14:02 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:02 Klas hahaha, awesome
14:05 bowhunter joined #gluster
14:12 baudster Klas: you run it on redhat/centos?
14:13 mchangir joined #gluster
14:13 Klas baudster: nope, ubuntu
14:14 Klas but with our own version
14:14 Klas not ubuntu package
14:15 baudster how do you do it through a startup script exactly?
14:16 Klas I'll post it
14:16 baudster that would be great
14:16 Klas @paste
14:16 glusterbot Klas: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
14:17 Klas http://termbin.com/1g50
14:17 Klas I'll post depended script as well
14:18 baudster you run which version of ubuntu?
14:18 Klas trusty and xenial
14:18 baudster this works on xenial as well?
14:18 Klas http://termbin.com/2ai9
14:18 Klas yes it does
14:18 Klas might not be perfect, but it seems to work well for us
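[The termbin scripts aren't reproduced in this log, but the usual shape of such a boot-time mount script is a retry loop around the mount command. A generic editorial sketch, not Klas's actual script; the mount command at the end is an example:]

```shell
# retry N CMD...: run CMD up to N times, sleeping a second between attempts.
# Succeed as soon as CMD does; fail if all attempts fail.
retry() {
    n="$1"; shift
    i=0
    while [ "$i" -lt "$n" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Example boot-time use, waiting for the local glusterd to become ready:
# retry 10 mount -t glusterfs dc1-wp-gluster-clust-01:/glustervol0 /glustervol0
```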
14:19 baudster good stuff Klas!
14:19 cloph https://bugzilla.redhat.com/show_bug.cgi?id=1317559
14:19 baudster thanks a heap
14:19 glusterbot Bug 1317559: medium, medium, ---, bugs, ASSIGNED , Glusterfs mounts in fstab are not mounted at boot
14:19 Klas baudster: thanks, hope it works for you as well =)
14:20 Klas the status thing is missing, and not good for you either
14:20 Klas so remove that part
14:20 baudster cool.
14:20 Klas (it assumes a bunch of stuff which will not be true for you)
14:20 baudster what kernel do you have on your xenial boxes?
14:20 Klas generally, the latest
14:20 Klas so we've tried every one over the last couple of weeks or so
14:21 mchangir joined #gluster
14:22 squizzi joined #gluster
14:24 Klas baudster: btw, if you find some bug with script, please report it to me so that I can fix it here as well ;)
14:24 Manikandan joined #gluster
14:24 Klas I believe we have tried pretty much everything
14:24 Klas but we might've missed something
14:25 baudster sure would
14:26 cloph bug: is not a systemd one :-P
14:27 Klas cloph: true, but it is converted into one by systemd
14:27 Klas also, I suck at systemd =P
14:27 Klas not really against it anymore, just haven't had the time to learn yet
14:32 kramdoss_ joined #gluster
14:34 bowhunter joined #gluster
14:34 shyam joined #gluster
14:35 hagarth joined #gluster
14:37 BitByteNybble110 joined #gluster
14:40 Lee1092 joined #gluster
14:43 gem joined #gluster
14:48 mchangir joined #gluster
14:57 level7 joined #gluster
14:58 cholcombe joined #gluster
14:59 kotreshhr left #gluster
15:03 nage joined #gluster
15:05 moss joined #gluster
15:09 roost joined #gluster
15:20 nishanth joined #gluster
15:33 arcolife joined #gluster
15:36 shyam joined #gluster
15:39 kdhananjay joined #gluster
15:42 ashiq_ joined #gluster
15:46 ieth0 joined #gluster
15:47 ankitraj joined #gluster
15:59 jiffin joined #gluster
16:05 jiffin joined #gluster
16:13 Manikandan joined #gluster
16:16 ieth0 joined #gluster
16:18 jkroon joined #gluster
16:23 Intensity joined #gluster
16:23 Gambit15 joined #gluster
16:39 nbalacha joined #gluster
16:39 kimmeh joined #gluster
16:42 rafi joined #gluster
16:43 hagarth joined #gluster
17:08 ankitraj joined #gluster
17:15 ieth0 joined #gluster
17:20 plarsen joined #gluster
17:26 gem joined #gluster
17:27 jiffin joined #gluster
17:33 gem joined #gluster
17:39 ppai joined #gluster
17:42 jwd joined #gluster
17:42 roost joined #gluster
17:43 jiffin joined #gluster
17:56 cliluw joined #gluster
18:09 ieth0 joined #gluster
18:22 hagarth joined #gluster
18:23 shyam joined #gluster
18:28 guhcampos joined #gluster
18:31 arcolife joined #gluster
18:32 [diablo] joined #gluster
18:34 mhulsman joined #gluster
18:38 Jacob843 joined #gluster
18:40 rafi1 joined #gluster
18:51 matiu joined #gluster
19:02 cliluw joined #gluster
19:02 derjohn_mob joined #gluster
19:03 ieth0 joined #gluster
19:05 baojg joined #gluster
19:09 plarsen joined #gluster
19:19 geekabit joined #gluster
19:22 geekabit hi guys, can anyone help me out please? I just did an ubuntu upgrade from 12.04 lts to 16.04.1 lts. This also upgraded glusterfs from 3.4.2 to 3.7.6. I did the upgrade on only one of the servers and stopped glusterfs on the other server.
19:22 geekabit and guess what, glusterfs doesn't come online on the upgraded server
19:23 geekabit glusterfs logfile said 'failed to get the volume file from server'
19:23 geekabit I'm not sure where to start
19:24 geekabit gluster volume status said everything is offline and n/a
19:24 geekabit I would have expected to see at least my local server there
19:26 geekabit disabled the firewall, but that doesn't seem to make a difference
19:26 ieth0 joined #gluster
19:30 arcolife joined #gluster
19:39 JoeJulian 1st, I wouldn't recommend 3.7.6. Use the latest 3.7 from the ,,(ppa).
19:39 glusterbot The official glusterfs packages for Ubuntu are available here: 3.6: http://goo.gl/XyYImN, 3.7: https://goo.gl/aAJEN5, 3.8: https://goo.gl/gURh2q
19:39 JoeJulian geekabit ^
19:39 geekabit thanks
19:39 JoeJulian geekabit: 2nd, I suspect the servers did not come up because there's no quorum (new feature)
19:40 JoeJulian you can force them with "gluster volume start $volname force"
19:46 geekabit JoeJulian: upgraded to the ppa version and now my brick is online again! Let's see if I can mount it.
19:50 geekabit JoeJulian: according to gluster volume status the brick is working. I can start and stop it with gluster volume start and stop. However, when I try to mount it, I still get the 'failed to get the volume file from server' message
19:52 plarsen joined #gluster
19:53 JoeJulian Check the client log
19:53 hagarth joined #gluster
19:53 plarsen joined #gluster
19:53 JoeJulian Usually that's a version problem or a firewall problem.
19:54 JoeJulian @ports
19:54 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
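[The ports glusterbot lists can be probed from a client with bash's /dev/tcp redirection. An editorial sketch; the host name is the example server from earlier in this log, and nc or nmap work just as well:]

```shell
# check_port HOST PORT: print "open" or "closed" depending on whether a TCP
# connection to HOST:PORT succeeds within 2 seconds (uses bash's /dev/tcp
# special files, so bash is required for the inner command).
check_port() {
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}

# Example: probe the management port and the first brick port.
# for p in 24007 24008 49152; do echo "$p: $(check_port dc1-wp-gluster-clust-01 $p)"; done
```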
19:54 plarsen joined #gluster
19:55 geekabit the firewall is disabled. I confirmed this with iptables -vnL. When I nmap the server I only see 49152 online, no sign of the management ports
19:56 JoeJulian Sounds like glusterfsd didn't get stopped and glusterd's failing to start.
19:56 JoeJulian ... which probably means that the bricks are still running 3.4
19:57 geekabit I have stopped/started everything manually several times.
19:57 geekabit ah, I see the problem
19:57 JoeJulian Check /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
19:58 JoeJulian Stopping glusterfs-server does not stop glusterfsd (the bricks). This is intentional so you can restart the management daemon without interfering with your volume.
20:02 geekabit okay, I stopped both bricks (the updated one and the original one) and killed all processes with a name containing gluster
20:05 geekabit the filesystem is exported via samba. Both fileservers are bricks but also mount their own glusterfs and re-export it via samba
20:07 geekabit I started glusterfs and started the brick using the gluster command. According to 'gluster volume status' the brick is online. But when I nmap, the only open port is 49152. The management is not started apparently
20:15 geekabit just did a sanity check. With everything disabled on the new server, I used the exact same commands to start everything on the original server. It all worked immediately.
20:16 ankitraj geekabit, how do I check whether my code is according to the guidelines or not?
20:16 ankitraj every time I am doing ./rfc.sh it says there are some coding guideline mistakes.. how to fix that?
20:18 ankitraj JoeJulian, ?
20:19 geekabit ankitraj: I wouldn't know. I'm also asking questions here. JoeJulian is helping me out.
20:19 ankitraj oh then JoeJulian ^^
20:19 JoeJulian ankitraj: I suggest asking in #gluster-dev or on the devel mailing list. There's more devs there, I'm just a user.
20:21 matiu joined #gluster
20:21 matiu joined #gluster
20:21 guhcampos joined #gluster
20:31 JoeJulian Sorry, geekabit, I was tied up in a meeting. Check the glusterd log I mentioned up there.
20:41 kimmeh joined #gluster
21:19 geekabit JoeJulian: np. Thanks for the support. It's getting late here.. I'm off to bed. Will continue the debugging next week
21:28 geekabit left #gluster
21:36 shyam joined #gluster
21:47 sage_ joined #gluster
21:47 overclk joined #gluster
21:48 siel joined #gluster
21:51 d0nn1e joined #gluster
21:51 uebera|| joined #gluster
21:51 uebera|| joined #gluster
21:51 shyam joined #gluster
21:52 sankarshan_away joined #gluster
21:53 hagarth joined #gluster
21:54 PotatoGim joined #gluster
21:55 virusuy joined #gluster
21:55 sadbox joined #gluster
21:58 semiosis joined #gluster
21:58 semiosis joined #gluster
21:59 anoopcs joined #gluster
22:00 Guest89761 joined #gluster
22:00 hchiramm joined #gluster
22:05 semiosis joined #gluster
22:19 ieth0 joined #gluster
22:44 semiosis joined #gluster
22:44 semiosis joined #gluster
22:47 baojg_ joined #gluster
23:10 ieth0 joined #gluster
23:33 Jacob843 joined #gluster
23:37 jeremyh joined #gluster
23:54 snehring joined #gluster
