
IRC log for #gluster, 2017-09-12


All times shown according to UTC.

Time Nick Message
00:03 zcourts joined #gluster
00:04 ronrib joined #gluster
00:10 zcourts joined #gluster
00:22 vbellur joined #gluster
00:22 vbellur joined #gluster
00:26 ws2k3 joined #gluster
00:27 major joined #gluster
00:32 jkroon joined #gluster
00:51 bwerthmann joined #gluster
00:53 jkroon joined #gluster
01:03 kramdoss_ joined #gluster
01:10 shyam joined #gluster
01:15 yosafbridge joined #gluster
01:20 pioto_ joined #gluster
01:53 winrhelx joined #gluster
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:27 winrhelx joined #gluster
02:56 ppai joined #gluster
03:33 nbalacha joined #gluster
03:43 gyadav joined #gluster
03:44 shdeng joined #gluster
03:48 dominicpg joined #gluster
03:51 jkroon joined #gluster
04:03 DV joined #gluster
04:24 shortdudey123 joined #gluster
04:25 Humble joined #gluster
04:30 apandey joined #gluster
04:32 jiffin joined #gluster
04:34 skumar joined #gluster
04:40 masber joined #gluster
04:51 Shu6h3ndu joined #gluster
04:57 gospod2 joined #gluster
05:00 karthik_us joined #gluster
05:04 kdhananjay joined #gluster
05:16 ndarshan joined #gluster
05:17 kdhananjay joined #gluster
05:27 sanoj joined #gluster
05:35 aravindavk joined #gluster
05:44 Saravanakmr joined #gluster
05:44 decayofmind joined #gluster
05:49 susant joined #gluster
05:49 hgowtham joined #gluster
05:54 susant joined #gluster
05:55 msvbhat joined #gluster
05:58 Prasad_ joined #gluster
06:11 rastar_ joined #gluster
06:26 jtux joined #gluster
06:27 msvbhat joined #gluster
06:38 kdhananjay joined #gluster
06:39 skoduri joined #gluster
06:40 bkunal__ joined #gluster
06:49 Vaelatern joined #gluster
06:56 bEsTiAn joined #gluster
07:01 omie888777 joined #gluster
07:03 HTTP_____GK1wmSU joined #gluster
07:04 HTTP_____GK1wmSU left #gluster
07:05 kdhananjay joined #gluster
07:09 apandey joined #gluster
07:09 aravindavk joined #gluster
07:18 ivan_rossi joined #gluster
07:19 HTTP_____GK1wmSU joined #gluster
07:20 HTTP_____GK1wmSU left #gluster
07:27 ronrib_ joined #gluster
07:37 sanoj_ joined #gluster
07:38 major joined #gluster
07:48 apandey_ joined #gluster
07:50 apandey joined #gluster
07:52 rwheeler joined #gluster
07:58 kraynor5b_ joined #gluster
07:59 mbukatov joined #gluster
08:05 msvbhat joined #gluster
08:07 mrw___ joined #gluster
08:18 ThHirsch joined #gluster
08:27 _KaszpiR_ joined #gluster
08:28 msvbhat joined #gluster
08:43 sanoj joined #gluster
08:48 kdhananjay joined #gluster
08:52 hgowtham joined #gluster
08:55 _KaszpiR_ joined #gluster
08:55 baojg joined #gluster
08:59 rastar_ joined #gluster
09:02 MrAbaddon joined #gluster
09:22 msvbhat joined #gluster
09:26 buvanesh_kumar joined #gluster
09:33 bkunal joined #gluster
09:34 hgowtham joined #gluster
09:36 rafi joined #gluster
09:42 aravindavk joined #gluster
09:44 sanoj joined #gluster
09:44 HTTP_____GK1wmSU joined #gluster
09:45 HTTP_____GK1wmSU left #gluster
09:59 kramdoss_ joined #gluster
10:19 rafi1 joined #gluster
10:30 nbalacha joined #gluster
10:30 poornima_ joined #gluster
10:34 shyam joined #gluster
10:38 kramdoss_ joined #gluster
10:45 aravindavk joined #gluster
10:48 bkunal joined #gluster
10:58 _KaszpiR_ joined #gluster
10:59 aardbolreiziger joined #gluster
11:00 baber_ joined #gluster
11:10 bfoster joined #gluster
11:13 ahino joined #gluster
11:14 alvinstarr joined #gluster
11:19 nbalacha joined #gluster
11:31 kramdoss_ joined #gluster
11:35 shyam joined #gluster
11:41 aravindavk joined #gluster
11:42 Eilyre joined #gluster
11:42 ahino1 joined #gluster
11:43 jkroon joined #gluster
11:47 ThHirsch joined #gluster
12:06 shaunm joined #gluster
12:12 ThHirsch joined #gluster
12:14 _KaszpiR_ joined #gluster
12:26 susant joined #gluster
12:31 shyam joined #gluster
12:36 ThHirsch joined #gluster
12:38 decayofmind joined #gluster
12:53 jiffin1 joined #gluster
12:58 ahino joined #gluster
13:02 dominicpg joined #gluster
13:03 baojg joined #gluster
13:03 ahino1 joined #gluster
13:10 atinm joined #gluster
13:12 anthony25 joined #gluster
13:13 ic0n joined #gluster
13:18 jstrunk joined #gluster
13:19 baber_ joined #gluster
13:22 aravindavk joined #gluster
13:23 skylar joined #gluster
13:23 skylar joined #gluster
13:34 susant joined #gluster
13:39 nbalacha joined #gluster
13:46 anthony25 joined #gluster
13:52 kotreshhr joined #gluster
13:54 decayofmind joined #gluster
13:57 shyam joined #gluster
14:05 major joined #gluster
14:05 hosom joined #gluster
14:10 plarsen joined #gluster
14:10 ic0n joined #gluster
14:12 aronnax joined #gluster
14:25 kdhananjay joined #gluster
14:26 ic0n joined #gluster
14:27 farhorizon joined #gluster
14:29 bigpic joined #gluster
14:33 kotreshhr joined #gluster
14:36 nbalacha joined #gluster
14:40 petruzzo joined #gluster
14:40 baber_ joined #gluster
14:49 farhorizon joined #gluster
14:54 farhorizon joined #gluster
15:02 fbred joined #gluster
15:02 fbred Hey, I'm having trouble mounting a volume, and I can see the error: Cache size 1073741824 is greater than the max size of 1041186816. Is there a way to increase the client cache size?
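
Background on the error above: that limit comes from the io-cache translator, which caps the cache at a fraction of the client's available memory, so the practical fix is usually to lower the volume's cache size below the reported max rather than to raise the client limit. A minimal sketch, with the volume name myvol as a placeholder:

    # run on any server in the trusted pool
    gluster volume set myvol performance.cache-size 512MB
    # then retry the mount on the client
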
15:05 baber_ joined #gluster
15:06 mbrandeis joined #gluster
15:06 wushudoin joined #gluster
15:12 aravindavk joined #gluster
15:13 kpease joined #gluster
15:19 farhorizon joined #gluster
15:24 saali joined #gluster
15:25 ic0n joined #gluster
15:36 ahino joined #gluster
15:43 vbellur joined #gluster
15:44 vbellur joined #gluster
15:44 vbellur joined #gluster
15:45 vbellur joined #gluster
15:47 baber_ joined #gluster
15:48 vbellur joined #gluster
15:49 vbellur joined #gluster
15:50 vbellur joined #gluster
15:53 vbellur joined #gluster
15:53 vbellur joined #gluster
15:54 mbrandeis joined #gluster
15:54 vbellur joined #gluster
15:55 vbellur joined #gluster
15:56 vbellur1 joined #gluster
15:57 ivan_rossi left #gluster
16:01 gyadav joined #gluster
16:04 vbellur joined #gluster
16:04 vbellur joined #gluster
16:06 vbellur1 joined #gluster
16:08 jiffin joined #gluster
16:16 jiffin joined #gluster
16:23 anthony25 joined #gluster
16:32 Wayke91_ joined #gluster
16:33 msvbhat joined #gluster
16:39 Wayke91_ Hey guys, I have an issue with my distributed-replicated volume: I'm seeing a lot of duplicate, empty files across the volume. It started after a replace-brick; it's still healing at the moment, so I don't want to blindly delete them without another opinion.
16:49 JoeJulian Wayke91_: Are you seeing those from a client or just on the bricks themselves?
16:50 Wayke91_ both
16:51 rafi joined #gluster
16:52 JoeJulian what version?
16:52 Wayke91_ 3.10 on centos 7.3
16:53 omie888777 joined #gluster
16:59 jiffin joined #gluster
17:00 JoeJulian Wayke91_: I see a bunch of bug fixes related to rebalance and dht since 3.10. You might consider upgrading.
17:02 Wayke91_ Well, I got some more info. It seems the brick used to replace the failed one wasn't empty; it had existing data from the volume. Once they saw the duplicates appearing, they apparently formatted the brick and began a heal.
17:07 JoeJulian oops
17:07 Wayke91_ yeah no kidding
17:08 Wayke91_ So now that I have that, I guess then I should be OK to delete the dups via client, right?
17:10 JoeJulian I wouldn't.
17:11 JoeJulian You have no way to be sure which file is going to be deleted.
17:11 Wayke91_ neat
17:11 JoeJulian If it was me, I'd stop the rebalance and delete the dht-linkto files from the bricks.
17:11 jiffin joined #gluster
17:12 jkroon joined #gluster
17:13 Wayke91_ OK, and what exactly is the dht-linkto file? Is that the link that shows up as the file in the brick, or is it the backend stuff in the .glusterfs folder?
17:14 Wayke91_ I'm still a bit new to Gluster I'm afraid.
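
Background on dht-linkto files, since the question comes up often: they are the zero-byte files DHT leaves on the "wrong" brick as pointers, with only the sticky bit set in their mode (shown as ---------T) and a trusted.glusterfs.dht.linkto xattr naming the subvolume that holds the real data. They sit alongside normal files on the brick, not under .glusterfs. A sketch for listing candidates on one brick, assuming the brick root is /data/brick1 (a placeholder):

    # zero-length files whose mode is exactly 1000 (sticky bit only),
    # skipping the .glusterfs housekeeping tree
    find /data/brick1 -path '*/.glusterfs' -prune -o \
         -type f -size 0 -perm 1000 -print
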
17:16 ron-slc joined #gluster
17:28 rafi1 joined #gluster
17:28 btspce joined #gluster
17:28 btspce_ joined #gluster
17:29 h4rry joined #gluster
17:32 jiffin1 joined #gluster
17:39 Wayke91_ JoeJulian: You seem to have answered my question, I'm reading your blog on the subject
17:44 garbageyard joined #gluster
17:46 garbageyard Hello. I am running GitLab in a container and using GlusterFS for persistence. Today, when i started my GitLab instance, i am getting errors for all files present under the /home/git/data/ssh dir
17:47 garbageyard For example, "chown: changing ownership of '/home/git/data/ssh/ssh_host_key': Input/output error"
17:49 jiffin joined #gluster
17:49 garbageyard Any idea why this error?
17:55 btspce_ left #gluster
17:56 JoeJulian garbageyard: Check "gluster volume status" to make sure your bricks are running. Check the client log for the volume mount.
17:58 _KaszpiR_ joined #gluster
17:59 garbageyard @JoeJulian: Two bricks residing in different volumes are showing N in the Online column output.
18:00 JoeJulian Well that's probably why then. You can try to diagnose it or you can just try to force them to restart: "gluster volume start $volname force"
18:00 JoeJulian I would diagnose it.
18:02 garbageyard Ok. Let me check the logs. :)
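
For the diagnosis step, the stock packages put logs in predictable places; containerized deployments like this one may relocate them. A sketch, with volume and path names as placeholders (mount and brick log names are derived from the paths, slashes becoming dashes):

    # on each server: per-brick logs, named after the brick path
    less /var/log/glusterfs/bricks/data-brick1.log
    # glusterd's own log
    less /var/log/glusterfs/glusterd.log
    # on the client: the fuse mount log, named after the mount point
    less /var/log/glusterfs/home-git-data.log
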
18:04 _KaszpiR_ joined #gluster
18:11 Wayke91_ JoeJulian: So if I'm understanding you correctly: I've stopped my volume. Next, I ran a find against my bricks for -size 0 files. I took a few samples and verified that the same files, with the correct size, also exist on the bricks. I then ran "getfattr -n trusted.glusterfs.dht -e hex" against all the files with the same name, and came up with: trusted.glusterfs.dht: No such attribute
18:12 MrAbaddon joined #gluster
18:12 Wayke91_ At this point, I'm unsure what to do next.  Are the empty files on the brick the dht-linkto files?
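
One detail that explains the "No such attribute" result above: trusted.glusterfs.dht is a layout xattr stored on directories, not on regular files, and only the zero-byte link files carry the linkto xattr. A sketch, with the brick path again a placeholder:

    # directories carry the layout xattr
    getfattr -n trusted.glusterfs.dht -e hex /data/brick1/some/dir
    # link files carry the pointer to the subvolume with the real data
    getfattr -n trusted.glusterfs.dht.linkto -e text /data/brick1/some/file
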
18:13 garbageyard @JoeJulian: I am actually running GitLab on RancherOS, which allows only containers to run. So in this case, i used this link to set up GlusterFS (https://github.com/gluster/gluster-kubernetes/tree/master/docs/examples/dynamic_provisioning_external_gluster). I used the Docker image by Humble (https://hub.docker.com/r/gluster/gluster-centos/~/dockerfile/) to provision GlusterFS clients (as containers) on 3 Kubernetes nodes. When i looked right now, i saw that the client containers were in a stopped state. Now when i restarted them and got inside one of the containers, i only see the "/usr/sbin/init" process running in the output of "ps -ef"
18:14 garbageyard If i run "systemctl enable glusterd.service", i get error "Failed to get D-Bus connection: No such file or directory"
18:16 JoeJulian is /usr/sbin/init a symlink to systemd?
18:16 garbageyard Yes
18:16 garbageyard lrwxrwxrwx 1 root root 22 May 29 08:13 /usr/sbin/init -> ../lib/systemd/systemd
18:16 JoeJulian cool
18:17 JoeJulian systemctl's going to fail to do anything if dbus isn't running... hmm.
18:17 garbageyard Ok. So by dbus you mean systemd?
18:18 rafi joined #gluster
18:19 JoeJulian dbus (dbus-daemon) is a service used by systemd.
18:20 garbageyard Ok. So how can i start the dbus-daemon if systemctl itself isn't working?
18:20 JoeJulian That's the thing... it's supposed to be started by systemd.
18:23 JoeJulian Maybe your docker image is corrupted? Set imagePullPolicy=Always and see if it fixes it?
18:23 Humble $ docker run -v /etc/glusterfs:/etc/glusterfs:z -v /var/lib/glusterd:/var/lib/glusterd:z -v /var/log/glusterfs:/var/log/glusterfs:z -v /sys/fs/cgroup:/sys/fs/cgroup:ro -d --privileged=true --net=host gluster/gluster-centos
18:24 Humble garbageyard, can you try something like above ?
18:24 baber_ joined #gluster
18:24 garbageyard Sure
18:24 JoeJulian Oh, hey, you're here. :D
18:24 garbageyard :D
18:24 Humble just create /etc/glusterfs /var/lib/glusterd /var/log/glusterfs in the host where you run above command
18:24 bwerthmann joined #gluster
18:25 garbageyard Ok
18:25 Humble JoeJulian, I was about to sleep, but couldn't resist jumping in after seeing your attempt to help!
18:25 Humble JoeJulian++ thanks!
18:25 garbageyard Those dir are already there. I'll try the command.
18:25 glusterbot Humble: JoeJulian's karma is now 38
18:26 JoeJulian Humble: You'll be at Gluster Summit, right?
18:26 Humble JoeJulian, yeah, planning to be there .. have a presentation as well..
18:27 JoeJulian I should know that...
18:27 Humble JoeJulian, u r for sure , isnt it ?
18:28 ThHirsch joined #gluster
18:28 JoeJulian I am for sure, yes
18:28 Humble we will have the "walk" like last time and chat about some things :)
18:28 JoeJulian I'm really looking forward to picking your and lpabon's brains.
18:29 garbageyard Still the same issue
18:29 Humble "very small or zombie brain" and easy to pick it up :)
18:29 JoeJulian lol
18:29 Humble garbageyard, what u see now
18:29 Humble same dbus error ?
18:30 garbageyard Only /usr/sbin/init running
18:30 Humble ok..
18:30 Humble then whats the error ?
18:30 amye Humble, which session? I'm forgetting it
18:30 Humble could be something like "CNS" -> container native storage
18:30 Humble :)
18:30 garbageyard Same d-bus error
18:31 amye Oh, ok, that was listed as Michael Adam.
18:31 Humble yeah..
18:31 Humble garbageyard, which version of docker is running in your setup ?
18:31 Humble garbageyard, oh.. wait
18:31 Humble we only need to run "init"
18:31 JoeJulian I was wondering why it wasn't Humble or Louis, but then I just figured I must not actually know everybody.
18:32 garbageyard Docker version: 1.12.6
18:32 JoeJulian meeting... bbl
18:32 Humble JoeJulian, we are a team , so its fine :)
18:32 Humble garbageyard, init will take care of other service
18:32 Humble what you can do here
18:33 Humble do "exec" into the running container
18:33 Humble and check "systemctl status glusterd"
18:34 garbageyard The output that i provided above related to the d-bus error was from within the container itself
18:34 Humble so let me start from scratch
18:34 Humble so u ran above command
18:34 Humble and docker ps shows container is runnning
18:35 garbageyard Yes
18:35 Humble and if you get inside the container
18:35 Humble and try "status" not "enable"
18:35 Humble you get error ?
18:35 Humble JoeJulian, cya
18:35 garbageyard Yes, i get d-bus error
18:35 garbageyard ...when i get inside container and run systemctl command
18:36 rafi joined #gluster
18:37 garbageyard I hope this has nothing to do with the host OS
18:37 Humble garbageyard, can you please pastebin docker ps output ?
18:38 garbageyard Sure
18:38 Humble https://paste.fedoraproject.org/paste/O6660C98FmThECEXUmPZZQ/deactivate/gBxUDfoyUqP4bEhZ1sO7vZSGWvdu0LubUM9z99dc0dbiDizEJBk2jtH9ksoTH1Ic
18:38 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
18:38 rafi joined #gluster
18:38 Humble garbageyard, I gave a try in my f26 system
18:39 garbageyard https://pastebin.com/EzDwK5Bh
18:39 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:40 Humble garbageyard, can you pass "docker inspect" as well
18:40 garbageyard Were you able to see the pastebin one?
18:40 Humble yeah, I noticed an attempt 9 minutes back
18:41 Humble because that dbus mostly comes because of "privilge" or due to sysfs cgroup mount
18:42 garbageyard Yes, i guess i have seen this mentioned on some forums as well.
18:42 garbageyard Shall i update the same pastebin?
18:42 Humble yep..
18:42 garbageyard Thanks
18:43 garbageyard Done
18:44 Humble garbageyard, which host OS is this ?
18:44 garbageyard RancherOS :(
18:45 garbageyard Supports everything as containers only. Bare minimum utilities installed
18:46 garbageyard This worked on all the three nodes when i first ran your image
18:46 Humble yeah, I was reading the chat logs
18:46 Humble so it was working at one stage
18:46 garbageyard Yes
18:46 Humble and also I see you were able to pick the volume status for bricks when you had a discussion with JoeJulian
18:47 Humble in that case "glusterd" was enabled and running
18:47 Humble isnt it ?
18:47 garbageyard Gluster is running on separate hosts (3 instances) outside of Kubernetes
18:48 garbageyard I just created volumes on the Gluster host using Heketi and mapped the GitLab path using a PersistentVolumeClaim (Kubernetes)
18:49 Humble hmm. then why are these gluster containers used, if you are running gluster outside the kube cluster?
18:49 garbageyard Here, on Kubernetes, i used your image to run as client
18:49 Humble oh.. so its meant for just mounting gluster share ?
18:49 Humble not as server ?
18:49 garbageyard https://github.com/gluster/gluster-kubernetes/tree/master/docs/examples/dynamic_provisioning_external_gluster#validate-communication-and-gluster-prerequisites-on-the-kubernetes-nodes
18:49 glusterbot Title: gluster-kubernetes/docs/examples/dynamic_provisioning_external_gluster at master · gluster/gluster-kubernetes · GitHub (at github.com)
18:50 garbageyard This section says i need to have Gluster client running
18:50 garbageyard ...so
18:50 Humble garbageyard, ok.. let me share some details
18:50 Humble if gluster volumes are hosted outside the kube cluster
18:50 Humble and if yu have a heketi running
18:50 Humble and once you have the claim created
18:51 Humble the rest is "mounting " the share for app pod
18:51 garbageyard That mounting was also done when i created deployment
18:51 Humble for that you actually dont need gluster server container
18:52 Humble in kube the mount will happen on the node where app pod is running
18:52 Humble what u need there is "mount.glusterfs" binary
18:52 garbageyard Oh... is it? So i think that's the reason things were working fine till now, because many times i noticed that this container was showing in a stopped state
18:53 garbageyard But that mount.glusterfs is not present on the host OS
18:53 garbageyard How come it still worked?
18:53 garbageyard When this container ran for the first time and i logged into that container, there i saw that binary
18:53 Humble then in your app pod or where you mount the share that binary is present
18:54 Humble yeah..
18:54 garbageyard So does it mean that the container was needed for the first time?
18:54 garbageyard ...this is kind of confusing
18:54 garbageyard :(
18:55 Humble ideally you could use any container which has "glusterfs-fuse" in it
18:55 garbageyard So running that container is a prerequisite at least when setting things up, is it?
18:56 Humble docker pull gluster/glusterfs-client
18:56 garbageyard Ok
18:56 Humble even the above client is capable of mounting gluster shares
18:56 Humble I am not familiar with rancheros
18:56 Humble so may be missing something in the setup
18:56 Humble however the normal workflow is that
18:57 garbageyard I can understand that...most people haven't heard about it
18:57 Humble if you are using a host OS like fedora/centos/rhel
18:57 garbageyard Ok. Thanks for that but coming back to the issue of input/output error...what should i do then? :)
18:57 Humble what the kube plugin does is mount the share on the host where the app pod is scheduled and then bind-mount that into the "application pod"
18:58 Humble in short something like "-v <hostmount>:<containermount>
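
Spelled out, the manual equivalent of that workflow looks roughly like the following; hostnames, the volume name, and the image are all placeholders, and the node needs the glusterfs-fuse package (or an equivalent container) so that mount.glusterfs exists:

    # on the kube node: fuse-mount the volume from the external cluster
    mount -t glusterfs gluster1.example.com:/gitlabvol /mnt/gitlabvol
    # then bind-mount it into the application container, which is
    # roughly what kubelet does with the claim
    docker run -v /mnt/gitlabvol:/home/git/data gitlab-image
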
18:58 garbageyard Hmmm...thanks for the explanation Humble...got that
18:58 garbageyard For the error related to input/output, how can i debug then?
18:59 Humble garbageyard, I havent read all the chat
19:00 Humble however u r saying, you have noted brick process down in your gluster cluster?
19:00 garbageyard Today, when i started my GitLab instance, i am getting errors for all files present under the /home/git/data/ssh dir
19:00 garbageyard Yes
19:00 garbageyard For example, "chown: changing ownership of '/home/git/data/ssh/ssh_host_key': Input/output error"
19:01 garbageyard Then Joe asked me to check brick status
19:01 Humble yep, its valid
19:01 garbageyard Error?
19:01 Humble or are you able to mount the same share in any other server ?
19:02 garbageyard Any other server?
19:03 garbageyard GitLab is running on one host only.
19:03 garbageyard It's part of Kubernetes but a single node cluster as of now
19:03 vbellur joined #gluster
19:03 Humble '/home/git/data/ssh/ssh_host_key': Input/output error" vbellur what can be checked for this error ? :)
19:04 Humble on a glusterfs share
19:07 ahino joined #gluster
19:10 vbellur1 joined #gluster
19:14 vbellur joined #gluster
19:20 kjackal joined #gluster
19:20 vbellur1 joined #gluster
19:22 vbellur joined #gluster
19:23 vbellur1 joined #gluster
19:24 vbellur joined #gluster
19:26 vbellur1 joined #gluster
19:26 vbellur joined #gluster
19:27 vbellur1 joined #gluster
19:28 vbellur joined #gluster
19:28 vbellur1 joined #gluster
19:29 vbellur joined #gluster
19:30 vbellur1 joined #gluster
19:31 vbellur joined #gluster
19:31 vbellur1 joined #gluster
19:32 vbellur joined #gluster
19:32 major joined #gluster
19:33 vbellur1 joined #gluster
19:34 vbellur joined #gluster
19:34 vbellur1 joined #gluster
19:35 vbellur joined #gluster
19:36 vbellur1 joined #gluster
19:37 vbellur joined #gluster
19:37 vbellur1 joined #gluster
19:38 vbellur joined #gluster
19:39 vbellur joined #gluster
19:39 baber_ joined #gluster
19:40 vbellur1 joined #gluster
19:40 vbellur joined #gluster
19:41 vbellur1 joined #gluster
19:42 vbellur joined #gluster
19:42 vbellur1 joined #gluster
19:43 vbellur joined #gluster
19:45 msvbhat joined #gluster
19:45 Humble garbageyard++
19:45 glusterbot Humble: garbageyard's karma is now 1
20:11 rastar_dinner joined #gluster
20:18 mbrandeis joined #gluster
20:21 PatNarciso joined #gluster
20:33 bigpic left #gluster
20:42 shyam joined #gluster
21:01 zcourts_ joined #gluster
21:05 bwerthmann joined #gluster
21:06 saali joined #gluster
21:48 hmamtora joined #gluster
21:49 hmamtora Folks, I am looking to upgrade gluster from version 3.5 to version 3.8
21:50 hmamtora But via an online upgrade; if anybody has already done this successfully, can you please share the process with me? Thanks....
22:06 hmamtora joined #gluster
22:07 hmamtora left #gluster
22:08 hmamtora joined #gluster
22:10 shyam joined #gluster
22:39 shyam joined #gluster
22:41 JoeJulian hmamtora: Depends on which 3.5 version, if I'm remembering correctly. There were some early versions of 3.5 without the needed api pieces to make the transition without downtime.
22:41 JoeJulian That's one transition I would try to schedule offline if I could.
22:47 hmamtora I did try offline and that works
22:47 hmamtora my gfs 3.5 version is 3.5.3
22:50 JoeJulian Yes, I'm certain offline would work. I'm just not sure that a running 3.5.<6 client can connect to a newer server. If you tested that and it works, awesome. I know I had a ton of trouble going from somewhere around there to 3.6.something way back when.
22:50 JoeJulian But yeah, if you can have down time, that upgrade path should be a cinch.
22:51 hmamtora I upgraded gluster to 3.8.13 and it worked with my clients still being on 3.5.3
23:02 JoeJulian perfect, that's the only issue that I knew of.
23:05 hmamtora I am looking to upgrade the gluster server without downtime, via online upgrade :)
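
For reference, the online path is the rolling upgrade from the Gluster upgrade guide: one server at a time, replicated volumes only, letting self-heal finish before moving on. A rough per-server sketch, assuming systemd-based servers; package names, versions, and the volume name are illustrative, and the 3.5-to-3.8 release notes should be checked for op-version specifics:

    systemctl stop glusterd
    pkill glusterfs; pkill glusterfsd     # stop brick and self-heal daemons
    yum update glusterfs-server           # pull in the 3.8 packages
    systemctl start glusterd
    gluster volume heal myvol info        # wait for pending heals to reach 0
                                          # before upgrading the next server
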
23:16 hmamtora joined #gluster
23:29 plarsen joined #gluster
23:32 omie888777 joined #gluster
23:48 JoeJulian_ joined #gluster
23:48 amye_ joined #gluster
23:49 ws2k3 joined #gluster
23:49 valkyr3e joined #gluster
23:49 ndevos joined #gluster
23:49 ndevos joined #gluster
23:50 mlhess joined #gluster
23:51 dataio joined #gluster
23:52 swebb joined #gluster
23:53 decayofmind joined #gluster
23:54 shortdudey123 joined #gluster
23:55 portdirect joined #gluster
23:59 Jacob843 joined #gluster
