
IRC log for #gluster, 2016-09-06


All times shown according to UTC.

Time Nick Message
00:04 beemobile joined #gluster
00:14 sandersr joined #gluster
00:18 beemobile joined #gluster
00:19 victori joined #gluster
00:20 harish joined #gluster
00:57 plarsen joined #gluster
01:12 shdeng joined #gluster
01:13 shdeng joined #gluster
01:27 victori joined #gluster
01:27 Lee1092 joined #gluster
01:35 ic0n joined #gluster
01:35 prth joined #gluster
01:36 derjohn_mobi joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:08 harish joined #gluster
02:23 skoduri joined #gluster
02:27 kshlm joined #gluster
02:39 hchiramm joined #gluster
02:48 suliba joined #gluster
03:03 kramdoss_ joined #gluster
03:07 raghug joined #gluster
03:16 magrawal joined #gluster
03:41 victori joined #gluster
03:47 kdhananjay joined #gluster
03:49 Gnomethrower joined #gluster
03:50 Saravanakmr joined #gluster
03:54 harish joined #gluster
03:59 prth joined #gluster
03:59 nbalacha joined #gluster
04:00 rafi joined #gluster
04:10 shubhendu joined #gluster
04:13 sanoj joined #gluster
04:14 beemobile_ joined #gluster
04:17 beemobile joined #gluster
04:20 nbalacha joined #gluster
04:21 itisravi joined #gluster
04:25 riyas joined #gluster
04:28 jkroon joined #gluster
04:43 hgowtham joined #gluster
04:44 k4n0 joined #gluster
04:45 victori joined #gluster
05:02 jiffin joined #gluster
05:04 ndarshan joined #gluster
05:05 karnan joined #gluster
05:07 ankitraj joined #gluster
05:12 karthik_ joined #gluster
05:16 atinm joined #gluster
05:18 kukulogy joined #gluster
05:28 ramky joined #gluster
05:37 om joined #gluster
05:41 ppai joined #gluster
05:42 aspandey joined #gluster
05:44 derjohn_mobi joined #gluster
05:45 aravindavk joined #gluster
05:49 devyani7 joined #gluster
05:49 victori joined #gluster
05:51 Gnomethrower joined #gluster
05:52 RameshN joined #gluster
05:59 Manikandan joined #gluster
05:59 mhulsman joined #gluster
06:01 Manikandan joined #gluster
06:02 ashiq joined #gluster
06:03 mhulsman1 joined #gluster
06:06 hgowtham joined #gluster
06:06 skoduri joined #gluster
06:12 jkroon joined #gluster
06:14 msvbhat joined #gluster
06:17 jwd joined #gluster
06:18 itisravi joined #gluster
06:21 Muthu_ joined #gluster
06:23 jtux joined #gluster
06:26 satya4ever joined #gluster
06:31 hchiramm joined #gluster
06:41 kotreshhr joined #gluster
06:41 arcolife joined #gluster
06:54 robb_nl joined #gluster
06:56 victori joined #gluster
07:06 creshal joined #gluster
07:08 derjohn_mobi joined #gluster
07:13 d0nn1e joined #gluster
07:16 jtux joined #gluster
07:25 ivan_rossi joined #gluster
07:30 fsimonce joined #gluster
07:32 hgowtham joined #gluster
07:38 SOLDIERz joined #gluster
07:39 SOLDIERz hey everyone, I got a cluster up and running with 12 nodes and replica 3. I want to add more nodes to the cluster, but I'd like to ask whether somebody has already expanded their cluster and can report what happened after executing the rebalance command
07:40 SOLDIERz is there anything i should be worried about
07:40 SOLDIERz any pitfalls
07:41 SOLDIERz i set up a test cluster and tried it, and i saw practically no impact, but that's probably down to the small amount of data in the test cluster
07:41 [diablo] joined #gluster
07:41 SOLDIERz the production system holds around 40 TB of data. can somebody tell me about their experience with the rebalancing process?
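
A minimal sketch of the expand-and-rebalance sequence being discussed; host, brick and volume names are placeholders, not taken from this log. Rebalance runs in the background and can be watched or stopped:

    gluster peer probe newnode01              # add each new node to the trusted pool
    gluster volume add-brick myvol newnode01:/export/brick1/data newnode02:/export/brick1/data newnode03:/export/brick1/data
                                              # bricks are added in multiples of the replica count
    gluster volume rebalance myvol start      # start migrating data onto the new bricks
    gluster volume rebalance myvol status     # per-node progress counters
    gluster volume rebalance myvol stop       # can be stopped if client impact is too high
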
07:46 deniszh joined #gluster
07:46 robb_nl joined #gluster
07:48 om joined #gluster
07:50 jri joined #gluster
07:50 ahino joined #gluster
07:51 jri left #gluster
07:53 beemobile joined #gluster
08:02 victori joined #gluster
08:03 jtux joined #gluster
08:03 SOLDIERz joined #gluster
08:04 prth joined #gluster
08:04 Muthu_ joined #gluster
08:12 beemobile joined #gluster
08:19 derjohn_mobi joined #gluster
08:24 jri_ joined #gluster
08:26 harish joined #gluster
08:31 mhulsman joined #gluster
08:32 mhulsman1 joined #gluster
08:34 ninkotech joined #gluster
08:34 ninkotech__ joined #gluster
08:37 itisravi joined #gluster
08:41 hgowtham joined #gluster
08:46 rastar joined #gluster
08:49 Mmike joined #gluster
08:49 Mmike joined #gluster
08:53 ninkotech__ joined #gluster
08:53 ninkotech_ joined #gluster
08:57 kdhananjay1 joined #gluster
09:01 victori joined #gluster
09:01 jri joined #gluster
09:03 jkroon joined #gluster
09:03 jri_ joined #gluster
09:04 ninkotech__ joined #gluster
09:04 ninkotech joined #gluster
09:09 kshlm joined #gluster
09:09 jri joined #gluster
09:11 jri_ joined #gluster
09:12 kdhananjay joined #gluster
09:18 jri joined #gluster
09:21 jkroon joined #gluster
09:21 nishanth joined #gluster
09:21 jri_ joined #gluster
09:27 nbalacha amye, there?
09:32 mhulsman joined #gluster
09:33 prth joined #gluster
09:33 misc nbalacha: likely still sleeping
09:34 glusterbot misc: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:34 mhulsman1 joined #gluster
09:34 nbalacha misc, thanks
09:34 misc glusterbot: that's not a naked ping, go get upgraded to skynet :p
09:43 aspandey joined #gluster
09:49 shubhendu joined #gluster
09:51 k4n0 joined #gluster
09:52 derjohn_mob joined #gluster
09:52 nishanth joined #gluster
09:53 ndarshan joined #gluster
09:53 karnan joined #gluster
09:54 Arrfab misc: isn't that your responsibility to update it to skynet ? :-)
09:54 misc Arrfab: it is not packaged in EPEL
10:02 mhulsman joined #gluster
10:02 jri joined #gluster
10:04 victori joined #gluster
10:06 jri__ joined #gluster
10:11 itisravi_ joined #gluster
10:11 itisravi_ joined #gluster
10:13 suliba joined #gluster
10:14 derjohn_mob joined #gluster
10:22 nbalacha joined #gluster
10:25 [diablo] joined #gluster
10:27 mhulsman1 joined #gluster
10:33 Muthu_ joined #gluster
10:38 karnan joined #gluster
10:49 RameshN joined #gluster
10:51 ndarshan joined #gluster
10:56 johnmilton joined #gluster
10:57 ankitraj joined #gluster
11:11 msvbhat joined #gluster
11:12 RameshN joined #gluster
11:18 mshillam joined #gluster
11:21 victori joined #gluster
11:22 madmatuk joined #gluster
11:31 rwheeler joined #gluster
11:37 harish joined #gluster
11:41 kramdoss_ joined #gluster
11:43 ankitraj #info Gluster bug-triage meeting will start in 15 min on #gluster-meeting
11:48 raghug joined #gluster
11:50 beemobile joined #gluster
11:54 hgowtham joined #gluster
12:00 Piotr123 joined #gluster
12:03 shubhendu joined #gluster
12:16 side_control i'm not sure where to start troubleshooting this: under high load, nfs hangs and the clients kernel-panic after a while (in VMs); bare-metal clients seem to act fine
12:18 side_control so, say a gluster node has two interfaces, infiniband and ethernet, peer probe was done on the ib interface, should mount -t glusterfs work on the ethernet interface?
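
For context, a fuse mount only uses the given hostname to fetch the volume file; after that the client connects to the brick addresses stored in the volume definition (the ones used at probe/create time). A hedged example with placeholder names:

    # mounting via a name that resolves to the ethernet interface works for the volfile fetch
    mount -t glusterfs eth-host.example.com:/myvol /mnt/myvol
    # the data path still goes to the brick hostnames recorded in the volume, so if those
    # resolve to the InfiniBand addresses, brick traffic keeps using ib0 regardless
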
12:23 unclemarc joined #gluster
12:23 k4n0 joined #gluster
12:24 karnan joined #gluster
12:33 Jules- joined #gluster
12:33 ira joined #gluster
12:34 mhulsman joined #gluster
12:37 om joined #gluster
12:37 om joined #gluster
12:37 mhulsman1 joined #gluster
12:37 Lee1092 joined #gluster
12:39 om joined #gluster
12:42 Philambdo joined #gluster
12:51 Philambdo joined #gluster
12:52 jri joined #gluster
12:56 derjohn_mob joined #gluster
12:56 Muthu_ joined #gluster
12:57 morse joined #gluster
12:58 baojg joined #gluster
13:05 jri_ joined #gluster
13:13 jri joined #gluster
13:15 atinm joined #gluster
13:16 hackman joined #gluster
13:23 squizzi joined #gluster
13:26 nishanth joined #gluster
13:26 emmajane joined #gluster
13:31 jwd joined #gluster
13:34 ron-slc joined #gluster
13:35 victori joined #gluster
13:40 ron-slc joined #gluster
13:41 Gnomethrower joined #gluster
13:42 post-factum anyone from Brno, CZ here?
13:45 Gnomethrower joined #gluster
13:45 skylar joined #gluster
13:48 jri_ joined #gluster
13:50 hackman joined #gluster
13:53 ashiq joined #gluster
13:54 shubhendu joined #gluster
13:56 jri joined #gluster
13:59 suliba joined #gluster
13:59 skoduri joined #gluster
14:00 plarsen joined #gluster
14:02 ZachLanich joined #gluster
14:05 johnmilton joined #gluster
14:06 kdhananjay joined #gluster
14:10 derjohn_mob joined #gluster
14:10 jri_ joined #gluster
14:14 archit_ joined #gluster
14:15 kotreshhr left #gluster
14:16 ppai joined #gluster
14:21 ndevos post-factum: I could probably find someone for you?
14:21 shyam joined #gluster
14:22 post-factum ndevos: pm
14:22 ndevos post-factum: sure!
14:27 jri joined #gluster
14:31 plarsen joined #gluster
14:34 jri_ joined #gluster
14:35 arcolife joined #gluster
14:40 ron-slc joined #gluster
14:42 prth joined #gluster
14:45 victori joined #gluster
14:46 shaunm joined #gluster
14:47 jri__ joined #gluster
14:47 atinm joined #gluster
14:49 ppai joined #gluster
14:50 jri joined #gluster
14:50 raghu` joined #gluster
14:52 mhulsman joined #gluster
14:55 jri_ joined #gluster
14:55 aravindavk joined #gluster
14:57 jri joined #gluster
14:58 jri__ joined #gluster
14:58 jbrooks joined #gluster
15:00 jri joined #gluster
15:02 hagarth joined #gluster
15:03 atrius- joined #gluster
15:03 kdhananjay Ulrar: there?
15:04 Ulrar kdhananjay: Yes
15:04 jri_ joined #gluster
15:04 kdhananjay Ulrar: OK. Do you know if this works fine with FUSE?
15:04 kdhananjay Ulrar: Not having client logs makes it difficult to guess what could be happening :)
15:04 Ulrar kdhananjay: No idea sorry
15:04 Ulrar Yeah I understand
15:05 Ulrar But proxmox doesn't have the client logs, unfortunately
15:05 Ulrar And I can't test it again
15:05 kdhananjay Ulrar: ok then let's start from the beginning.
15:06 kdhananjay Ulrar: you added 3 bricks, retaining the replica-count, correct?
15:06 Ulrar Correct
15:06 jri__ joined #gluster
15:06 Ulrar gluster volume add-brick VMs brick1 brick2 brick3
15:06 kdhananjay Ulrar: ... and?
15:07 Ulrar I pressed enter and everything got corrupted
15:07 Ulrar Instantly
15:07 Ulrar (I had a shell on one of the VMs, I saw the I/O errors start)
15:07 kdhananjay Ulrar: that's odd! :-o
15:07 wushudoin joined #gluster
15:07 Ulrar Yeah, I wasn't expecting it :/
15:08 kdhananjay Ulrar: ok let me try this (without a vm setup).
15:08 Ulrar If that matters it's 3.7.13 on debian
15:08 wushudoin joined #gluster
15:08 kdhananjay Ulrar: ah! 3.7.13!
15:08 kdhananjay Ulrar: i vaguely remember there were some issues with gfapi and graph switch.
15:08 kdhananjay Ulrar: let me confirm.
15:09 kdhananjay rastar: there?
15:11 kdhananjay no, the gfapi issues are supposed to have been *fixed* in 3.7.13
15:11 jri joined #gluster
15:12 Ulrar Mh
15:12 kdhananjay Ulrar: reboot had no effect on the state of the vms?
15:12 Ulrar Trying to remember if those servers were installed on 3.7.13 or 12 at first
15:12 Ulrar kdhananjay: None, most of them couldn't start again after having been powered off
15:14 Ulrar Anyway all those VMs are newly migrated on those servers so even if I had installed 3.7.12 at first, the VMs have been rebooted since
15:14 Ulrar Of that I am sure
15:14 Ulrar So 3.7.13 for both the servers and the lib
15:16 aravindavk joined #gluster
15:16 [fre] guys?
15:17 [fre] what's the easiest (read: safest) way to stop replication for 1 volume?
15:17 BitByteNybble110 joined #gluster
15:18 kdhananjay ok
15:18 rastar kdhananjay: in a meeting
15:18 kdhananjay rastar: nvm
15:18 mhulsman joined #gluster
15:19 Muthu_ joined #gluster
15:25 bowhunter joined #gluster
15:26 derjohn_mob joined #gluster
15:27 Sebbo1 joined #gluster
15:27 Wojtek944 joined #gluster
15:27 Wojtek944 [2016-09-06 15:27:32.008956] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-gv0-client-1: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [Transport endpoint is not connected]
15:27 Wojtek944 [2016-09-06 15:27:32.009277] E [MSGID: 114031] [client-rpc-fops.c:1727:client3_3_entrylk_cbk] 0-gv0-client-1: remote operation failed [Transport endpoint is not connected]
15:27 Wojtek944 [2016-09-06 15:27:32.009489] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-gv0-client-1: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [Transport endpoint is not connected]
15:28 Wojtek944 I'm having this loop endlessly in my gv0 logs, any ideas why that is? The mount is fine, and all bricks show up online in gluster volume status gv0
15:29 Wojtek944 I have restarted all gluster processes on this cluster but that did not stop this behavior
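
A couple of checks that often help with a "Transport endpoint is not connected" loop against a single client/brick pair; the volume name is from the log, everything else is illustrative:

    gluster volume status gv0            # confirm every brick is online and has a port
    gluster volume status gv0 clients    # list the clients each brick currently sees
    # from the affected node, check that the brick port behind gv0-client-1 is reachable,
    # e.g. with: nc -zv <brick-host> <brick-port>
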
15:31 Gambit15 Wojtek944, is it happening on all nodes, or one in particular?
15:31 Gambit15 I had something similar to this with one of my nodes at one point, and a reboot fixed it
15:31 squizzi joined #gluster
15:35 nbalacha joined #gluster
15:36 bluenemo joined #gluster
15:36 Wojtek944 one node only, let me try the reboot
15:37 rafi1 joined #gluster
15:39 Wojtek944 well, windows-style reboot seems to have worked :)
15:41 jri_ joined #gluster
15:43 mhulsman joined #gluster
15:46 mhulsman1 joined #gluster
15:47 skoduri joined #gluster
15:49 madmatuk Hi guys, we have a 2 node gluster up and running with 3 bricks per node, all added to our gv0 volume with replica 2. We wish to add 3 more nodes to our gluster instance, with 3 bricks in each. When we add these new bricks are we required to do a rebalance, or is this not necessary with a replica 2 type of volume?
15:52 jri joined #gluster
15:53 plarsen joined #gluster
15:55 rafi2 joined #gluster
16:04 side_control is there any way to change the vol interface?  glusterfs only works on one interface currently but i need it to work on another
16:06 ndevos side_control: you can do it with routing tricks, split-horizon-dns or even firewall rules
16:06 atrius- joined #gluster
16:06 side_control ndevos: what about changing the volume ip and peer addresses?
16:06 side_control ndevos: it seems like it's the internal comms that are hosing up fuse
16:06 msvbhat joined #gluster
16:06 kdhananjay Ulrar: i need brick logs too
16:06 kdhananjay Ulrar: ive dropped you a mail.
16:07 ndevos side_control: using hostnames is strongly recommended, I dont think you can easily change ip-addresses otherwise
16:08 side_control ndevos: i have two interfaces, eth0 and ib0 (infiniband), of course i want gluster connections on infiniband (40G link), but should glusterfs work on eth0?
16:08 ndevos side_control: you can look under /var/lib/glusterd and use 'find' and 'grep' to search for the wrong/old ip
16:08 side_control ndevos: its not an old or wrong ip, its just that each node has two ips, and im trying to serve vols to the ethernet network
16:08 ndevos side_control: gluster only uses whatever is setup on the system for routing and hostname resolving
16:08 d-fence joined #gluster
16:09 d-fence_ joined #gluster
16:09 side_control ndevos: ok, gluster peer status contains aliases though... i get it, just weird
16:09 side_control s/weird/annoying
16:10 ndevos side_control: the 'gluster peer' is a little different, it can do multiple hostnames/IPs for the same system
16:10 ndevos side_control: it is the 'gluster volume create' command and 'gluster volume info' that contains the hostnames/IPs the clients use to access the bricks
16:11 ankitraj joined #gluster
16:12 side_control ndevos: could i recreate the brick and use the other interface then?
16:12 side_control erm other hostname?
16:12 kdhananjay rastar: any known issues with gfapi graph switch in 3.7.13?
16:13 side_control since i have host.domain.com and host.ib.domain.com for the ib interface?
16:13 side_control ndevos: should that work?
16:13 kdhananjay rastar: wondering whether poornima's patch was available in 3.7.13 or 3.7.14
16:13 ahino1 joined #gluster
16:13 rastar kdhananjay: poornima fixed a graph change bug recently
16:13 rastar kdhananjay: I need to check, checking
16:13 ndevos side_control: you would need to recreate/edit the files under /var/lib/glusterd/vols/<volume> - edit the files on all gluster servers while not one glusterd is running (and try it in a test env)
16:14 side_control ndevos: thanks, i'll give that a whirl
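
A rough sketch of what ndevos describes, using the hostnames side_control mentioned; run it on every server while glusterd is stopped everywhere, and try it on a test cluster first:

    systemctl stop glusterd               # or 'service glusterd stop', depending on the distro
    grep -rl 'host.ib.domain.com' /var/lib/glusterd/vols/<volume> \
        | xargs sed -i 's/host\.ib\.domain\.com/host.domain.com/g'
    systemctl start glusterd
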
16:14 rastar kdhananjay: https://bugzilla.redhat.com/show_bug.cgi?id=1367294 3.7.15
16:14 glusterbot Bug 1367294: medium, unspecified, ---, oleksandr, CLOSED CURRENTRELEASE, IO ERROR when multiple graph switches
16:14 side_control ndevos: heard of any issues with nfs crashing under load? this is on 3.8.3
16:14 kdhananjay rastar: Ulrar is seeing some VM corruption issues with 3.7.13 right after add-brick. my guess is it could be the same gfapi issue.
16:15 ndevos side_control: no, not that I can remember... please file a bug for that and we can take a look
16:15 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:15 kdhananjay Ulrar: there?
16:16 ndevos side_control: I'm aware of some bug related to Windows/NFS, but nobody is able to reproduce it, hence fixing is a bit of guess work...
16:16 rastar kdhananjay: the bug would have resulted in a EIO, if hypervisor would not handle it then it is possible.
16:16 kdhananjay rastar: he did mention he saw io errors from within the vms.
16:17 rastar kdhananjay: however, that was a bug for quite some time
16:17 rastar kdhananjay: if Ulrar has started seeing this recently, we should investigate more
16:17 kdhananjay rastar: agree, but i am guessing this is the first time he's adding bricks, in other words doing graph switch.
16:18 kdhananjay rastar: i remember even previously some users had reported issues with remove-brick + add-brick
16:18 rastar kdhananjay: that is possible. Need to upgrade first and check. 3.7.15 has the fix.
16:20 kdhananjay Ulrar: is upgrading an option for you?
16:22 hagarth joined #gluster
16:23 Gambit15 joined #gluster
16:39 riyas joined #gluster
16:39 Ulrar kdhananjay: Sorry, we won't be trying again soon. We lost so much data and the clients are mad, we decided to do multiple volumes instead of one
16:39 Ulrar If I ever get enough servers to try I will though, but that won't be soon
16:40 kdhananjay Ulrar: OK.
16:41 Ulrar If I ever get the chance to try that again I will send a mail though. If it keeps expanding at this rate it might be in two or three months :)
16:41 Ulrar Let's hope
16:42 kdhananjay Ulrar: Sure. Thanks. :)
16:43 Ulrar Thank you for taking a look
16:43 kdhananjay Ulrar: np!
16:44 kdhananjay rastar++ too for confirming the existence of the issue in 3.7.13
16:44 glusterbot kdhananjay: rastar's karma is now 2
16:44 Ulrar rastar++ kdhananjay++
16:44 glusterbot Ulrar: rastar's karma is now 3
16:44 glusterbot Ulrar: kdhananjay's karma is now 8
16:44 kdhananjay Ulrar++
16:44 glusterbot kdhananjay: Ulrar's karma is now 3
16:51 squizzi joined #gluster
17:07 ivan_rossi left #gluster
17:29 pdrakeweb joined #gluster
17:33 plarsen joined #gluster
17:42 k4n0 joined #gluster
17:51 Philambdo1 joined #gluster
17:54 ZachLanich joined #gluster
17:58 BitByteNybble110 joined #gluster
17:59 gnulnx Hey-o.  If I have a distributed volume with 2 bricks, and I want to migrate all of the data on brick 1 to brick 1, can I just do a remove-brick?
17:59 gnulnx on brick 1 to brick 2*
18:01 JoeJulian gnulnx: That's supposed to work, yes.
18:01 JoeJulian Obviously you have to remove-brick...start, wait until it's complete (remove-brick...status) and then commit.
18:01 gnulnx Yup, testing the procedure now.
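
The sequence JoeJulian describes, with placeholder volume and brick names; data is migrated off the brick while "start" runs, and "commit" should only follow once status reports completed:

    gluster volume remove-brick myvol server1:/export/brick1 start
    gluster volume remove-brick myvol server1:/export/brick1 status    # wait for 'completed'
    gluster volume remove-brick myvol server1:/export/brick1 commit
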
18:02 madmatuk Hi guys, we have a 2 node gluster up and running with 3 bricks per node all added to our gv0 volume with replica 2. We wish to add 3 more nodes to our gluster instance, with 3 bricks in each. When we add these new bricks to the volume are we required to do a rebalance, or is this not necessary with a replica 2 type of volume?
18:05 B21956 joined #gluster
18:05 JoeJulian madmatuk: Do you have a plan for designating replicas? Your answer is, Yes, when you add distribute subvolumes, that's when you would rebalance (or at least rebalance...fix-layout).
18:07 madmatuk JoeJulian Our current setup was created like so : gluster volume create gv0 replica 2 gluster1:/export/sdb1/brick gluster2:/export/sdb1/brick gluster1:/export/sdc1/brick gluster2:/export/sdc1/brick gluster1:/export/sdd1/brick gluster2:/export/sdd1/brick
18:08 madmatuk the total space of this volume is 14.6TB, does that mean when adding the new bricks from new nodes, that the data will be moved around to balance it out then?
18:08 gnulnx JoeJulian: Procedure seems to work, however the 'status' command shows 0 rebalanced-files (even though all the files were rebalanced)
18:09 JoeJulian gnulnx: sounds like a bug.
18:09 JoeJulian madmatuk: That's what rebalance does, yes.
18:09 madmatuk JoeJulian Ok, and this must be done i presume?
18:10 JoeJulian madmatuk: Or you can just rebalance...fix-layout and the dht hashes will be re-allocated without moving data. That will allow the new bricks to be used.
18:10 JoeJulian You must, at minimum, do the rebalance...fix-layout.
18:11 madmatuk JoeJulian ok this sounds good. Will that then take into account that there is more space on the other bricks and use it accordingly thereafter?
18:12 victori joined #gluster
18:12 JoeJulian Nope
18:13 JoeJulian madmatuk: To see how dht works, see https://joejulian.name/blog/dht-misses-are-expensive/ or https://en.wikipedia.org/wiki/Distributed_hash_table
18:13 glusterbot Title: DHT misses are expensive (at joejulian.name)
18:13 msvbhat joined #gluster
18:13 madmatuk JoeJulian Thanks i will have a read.
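
Putting the above together for the layout madmatuk described: bricks for a replica-2 volume are added in pairs, in replica order, and then either a fix-layout (new files can start landing on the new bricks) or a full rebalance (existing files are also migrated) is run. The new host names below are placeholders:

    gluster volume add-brick gv0 gluster3:/export/sdb1/brick gluster4:/export/sdb1/brick
    gluster volume rebalance gv0 fix-layout start    # re-spread the DHT hash ranges only
    gluster volume rebalance gv0 start               # optional: also migrate existing files
    gluster volume rebalance gv0 status
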
18:18 shyam joined #gluster
18:39 nobody481 joined #gluster
18:43 nobody481 I'm installing gluster for the first time on 4 separate computers, 3 Ubuntu with gluster version 3.7.15 and 1 FreeBSD with gluster version 3.7.6.  I'm trying to get the peers connected to each other.  The 3 Ubuntu computers connect to each other with no problem.  I cannot get the FreeBSD peer to connect, when I try I get the error "peer probe: failed: Peer does not support required op-version".  Google
18:43 nobody481 searches haven't helped me solve the problem.  Any ideas how to solve this problem?
18:44 shyam joined #gluster
18:44 gnulnx nobody481: the freebsd version is too old and doesn't support the op-version that the rest of the cluster is set to (as the other servers are newer)
18:45 gnulnx Upgrade freebsd version is your best bet
18:46 nobody481 gnulnx: Unfortunately that is the most recent version available in the FreeBSD ports tree.  Is there a way to manually configure the op-version of the rest of the cluster?
18:46 gnulnx nobody481: Yeah true.  I compiled gluster on bsd, pretty trivial to do so.
18:46 gnulnx Yes you can set the op-version option.  I'm not sure if you can downgrade it (I haven't tried that)
18:47 nobody481 gnulnx: Would I set op-version in the glusterd.vol config file?
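
For what it's worth, on 3.7.x the operating version is kept in /var/lib/glusterd/glusterd.info rather than in glusterd.vol, and the cluster-wide value can be raised (lowering it is, as far as I know, not supported) with a volume-set on "all". An illustration only; the right numeric value depends on the versions involved:

    grep operating-version /var/lib/glusterd/glusterd.info    # per-peer operating version
    gluster volume set all cluster.op-version <op-version>    # raise the cluster op-version
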
18:51 nobody481 gnulnx: Or, I can try compiling gluster manually.  What installation package should I get for that?  The standard Ubuntu .tar.gz file somewhere?
18:52 gnulnx nobody481: https://gist.github.com/kylejohnson/8fd7c8c6e84b52121f37628354ed895b is how I do it
18:52 glusterbot Title: compiling glusterfs on freebsd · GitHub (at gist.github.com)
18:55 ahino joined #gluster
18:56 ashiq joined #gluster
19:01 nobody481 gnulnx: Thank you, I'll check it out
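
The gist linked above has the authoritative steps and dependency list; the general shape of a from-source build of a 3.7.x release tarball on FreeBSD is roughly the following (gmake because the build wants GNU make; the version and paths here are assumptions):

    tar xzf glusterfs-3.7.15.tar.gz
    cd glusterfs-3.7.15
    ./configure            # add --prefix and feature flags as needed
    gmake
    gmake install
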
19:02 rideh joined #gluster
19:06 mhulsman joined #gluster
19:10 mhulsman joined #gluster
19:13 victori_ joined #gluster
19:14 rafi joined #gluster
19:15 Gambit15 joined #gluster
19:23 wushudoin joined #gluster
19:24 StarBeast joined #gluster
19:49 johnmilton joined #gluster
19:49 samikshan joined #gluster
20:06 deniszh joined #gluster
21:07 shyam joined #gluster
21:07 shyam left #gluster
21:23 d0nn1e joined #gluster
21:24 coreping joined #gluster
21:29 guhcampos joined #gluster
21:31 deniszh1 joined #gluster
21:32 nobody481 gnulnx: Please forgive my ignorance, I've gotten to the point of running ./configure, but that fails with an error on line 13523.  I can't figure out what the syntax problem is.
22:03 plarsen joined #gluster
22:25 hackman joined #gluster
23:39 ZachLanich joined #gluster
