
IRC log for #gluster, 2015-07-27


All times shown according to UTC.

Time Nick Message
00:08 haomaiwa_ joined #gluster
00:15 TheCthulhu3 joined #gluster
00:15 nangthang joined #gluster
00:41 davidself joined #gluster
01:18 harish joined #gluster
01:37 Lee1092 joined #gluster
01:43 necrogami joined #gluster
01:50 harish joined #gluster
01:56 nangthang joined #gluster
02:16 dtrainor joined #gluster
02:35 ron-slc joined #gluster
02:36 ron-slc joined #gluster
02:38 maveric_amitc_ joined #gluster
02:50 DV__ joined #gluster
03:09 TheSeven joined #gluster
03:11 lyang0 joined #gluster
03:16 bharata-rao joined #gluster
03:20 DV joined #gluster
03:31 schandra joined #gluster
03:45 kanagaraj joined #gluster
03:47 itisravi joined #gluster
03:47 RameshN joined #gluster
03:48 nishanth joined #gluster
03:50 nbalacha joined #gluster
04:04 kshlm joined #gluster
04:04 haomaiwa_ joined #gluster
04:13 ppai joined #gluster
04:14 shubhendu joined #gluster
04:17 yazhini joined #gluster
04:24 kotreshhr joined #gluster
04:26 vmallika joined #gluster
04:28 rafi joined #gluster
04:34 DV joined #gluster
04:35 atinm joined #gluster
04:42 jwd joined #gluster
04:45 jwaibel joined #gluster
04:49 ramteid joined #gluster
05:00 ninkotech__ joined #gluster
05:01 vikumar joined #gluster
05:04 ninkotech__ joined #gluster
05:05 jiffin joined #gluster
05:08 gem joined #gluster
05:08 DV__ joined #gluster
05:08 vikumar joined #gluster
05:09 lalatenduM joined #gluster
05:09 ndarshan joined #gluster
05:16 pppp joined #gluster
05:21 vikumar joined #gluster
05:22 hagarth joined #gluster
05:28 vikumar joined #gluster
05:32 Bhaskarakiran joined #gluster
05:34 PaulCuzner joined #gluster
05:41 hchiramm joined #gluster
05:42 anil joined #gluster
05:42 ninkotech joined #gluster
05:45 ninkotech joined #gluster
05:45 atalur joined #gluster
05:47 deepakcs joined #gluster
05:47 ashiq joined #gluster
05:48 Manikandan joined #gluster
05:49 dusmant joined #gluster
05:51 aravindavk joined #gluster
06:04 jwd joined #gluster
06:11 kovshenin joined #gluster
06:18 ron-slc_ joined #gluster
06:18 jtux joined #gluster
06:19 ron-slc2 joined #gluster
06:22 hgowtham joined #gluster
06:26 skoduri joined #gluster
06:27 merlink joined #gluster
06:28 glusterbot News from newglusterbugs: [Bug 1245981] forgotten inodes are not being signed <https://bugzilla.redhat.com/show_bug.cgi?id=1245981>
06:29 meghanam joined #gluster
06:31 skoduri joined #gluster
06:32 jiffin1 joined #gluster
06:32 raghu joined #gluster
06:34 kotreshhr joined #gluster
06:35 maveric_amitc_ joined #gluster
06:36 aravindavk joined #gluster
06:49 dusmant joined #gluster
06:53 ramky joined #gluster
07:02 mbukatov joined #gluster
07:03 KennethDejonghe joined #gluster
07:06 aravindavk joined #gluster
07:09 ppai joined #gluster
07:18 ctria joined #gluster
07:19 spalai joined #gluster
07:20 kotreshhr joined #gluster
07:37 gem joined #gluster
07:40 overclk joined #gluster
07:41 shubhendu joined #gluster
07:44 shubhendu_ joined #gluster
07:47 ctria joined #gluster
07:53 arcolife joined #gluster
08:00 Philambdo joined #gluster
08:00 nangthang joined #gluster
08:14 skoduri joined #gluster
08:17 ajames41678 joined #gluster
08:17 ajames-41678 joined #gluster
08:21 itisravi joined #gluster
08:26 overclk joined #gluster
08:41 ctria joined #gluster
08:43 vmallika joined #gluster
08:44 ramky joined #gluster
08:50 jiffin1 joined #gluster
08:56 deniszh joined #gluster
08:57 kenansulayman joined #gluster
08:59 Guest36380 joined #gluster
09:03 skoduri joined #gluster
09:06 dusmant joined #gluster
09:08 overclk joined #gluster
09:08 rjoseph joined #gluster
09:08 kaushal_ joined #gluster
09:13 Leildin joined #gluster
09:15 PaulCuzner joined #gluster
09:16 TvL2386 joined #gluster
09:25 LebedevRI joined #gluster
09:34 PaulCuzner joined #gluster
09:38 lchabert joined #gluster
09:38 lchabert Hello gluster team !
09:39 lchabert i'm making some tests and i have some trouble concerning the --volfile-server parameter
09:39 csim hi
09:39 glusterbot csim: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:40 * csim throw a stuffed green android on glusterbot
09:40 lchabert i have a mount point with this option as "fallback server"
09:40 lchabert and when i shut down my primary host, my mount point gets "frozen"
09:41 lchabert any idea concerning this problem ? I'm using glusterfs v3.7.1
09:41 lchabert 3.7.2
09:41 lchabert no redundancy for my mount point
09:47 RedW joined #gluster
09:49 dusmant joined #gluster
09:49 PaulCuzner joined #gluster
09:51 poornimag lchabert, --volfile-server will be the host from which the volfile for the client process will be fetched. You could have multiple volfile servers set for a client, so they serve as backup servers.
09:52 poornimag lchabert, do you have multiple volfile servers for your mount point?
09:52 lchabert yes, but it does not work for me, my mount point does not respond if i shutdown one glusterfs server
09:53 lchabert "/usr/sbin/glusterfs --volfile-server=gfs02 --volfile-server=gfs03 --volfile-id=/gdata /mnt/volumes"
09:53 lchabert and when i shut gfs02, gfs03 has not been used
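
For reference, the same failover setup can also be expressed at mount time; a sketch using the hostnames from the log and assuming the 3.7-era mount.glusterfs option backup-volfile-servers (the volfile server is only consulted to fetch the volume definition):
    mount -t glusterfs -o backup-volfile-servers=gfs03 gfs02:/gdata /mnt/volumes
    # or as an fstab entry
    gfs02:/gdata  /mnt/volumes  glusterfs  defaults,_netdev,backup-volfile-servers=gfs03  0 0
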
09:53 poornimag do the bricks reside on the server you shutdown?
09:54 lchabert yes, the brick exists on both servers
09:54 poornimag what is the volume configuration?
09:54 poornimag i mean replicate or distribute?
09:54 lchabert my setup: 3 servers, with replica=3
09:55 lchabert only replicate, no distribute
09:55 lchabert 3 servers, same config
09:55 lchabert when i wrote something on gfs02, data has been replicated on gfs03
09:56 poornimag one small test, could you just stop the glusterd service and not shut down the server, and see if the mount point still hangs?
09:56 lchabert ok
09:57 poornimag in either case, the mount point should not hang, please raise a bz, with client and glusterd logs collected from all the nodes?
09:59 lchabert if i stop glusterfsd (systemctl stop glusterfsd), the mount point is not frozen
09:59 poornimag oh, i meant glusterd?
10:00 ajames-41678 joined #gluster
10:02 lchabert sorry, wrong daemon stopped
10:02 lchabert "systemctl stop glusterd" and the mount point is not frozen
10:03 skoduri joined #gluster
10:03 lchabert let me see for logs
10:04 lchabert which file do you need ?
10:08 lchabert (one process is still running on gfs02: glusterfs -s localhost --volfile-id gluster/glustershd, is it normal ?)
10:09 Manikandan joined #gluster
10:10 kotreshhr joined #gluster
10:11 kotreshhr joined #gluster
10:16 kshlm joined #gluster
10:19 twisted`_ joined #gluster
10:20 arcolife joined #gluster
10:20 Lee1092_ joined #gluster
10:21 and` joined #gluster
10:21 and` joined #gluster
10:22 hchiramm joined #gluster
10:23 meghanam joined #gluster
10:23 vikumar joined #gluster
10:23 Manikandan joined #gluster
10:24 vmallika joined #gluster
10:24 ppai joined #gluster
10:27 twisted` joined #gluster
10:32 gem joined #gluster
10:32 smohan joined #gluster
10:32 Arrfab joined #gluster
10:37 RedW joined #gluster
10:37 ramky joined #gluster
10:44 foster joined #gluster
10:46 kayn joined #gluster
10:47 kayn Hi guys, I would like to ask if it's possible to add an arbiter to the running volume with 2 nodes?
10:49 sankarshan_ joined #gluster
10:49 kdhananjay kayn: itisravi should be able to answer that.
10:49 kdhananjay itisravi: ^^
10:50 itisravi kayn: support for that is not yet included. you would have to create a new arbiter volume I'm afraid.
10:51 kayn itisravi: ok, thanks
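
For reference, a minimal sketch of creating a fresh arbiter volume on 3.7, since an arbiter could not yet be added to an existing volume (hostnames and brick paths are hypothetical):
    gluster volume create myvol replica 3 arbiter 1 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/arb1
    gluster volume start myvol
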
10:57 dusmant joined #gluster
10:58 jcastill1 joined #gluster
11:03 jcastillo joined #gluster
11:13 ajames41678 joined #gluster
11:13 ira joined #gluster
11:13 firemanxbr joined #gluster
11:17 harish joined #gluster
11:18 tdasilva joined #gluster
11:19 skoduri joined #gluster
11:19 samikshan joined #gluster
11:20 ppai joined #gluster
11:21 kotreshhr joined #gluster
11:23 DV__ joined #gluster
11:24 ashiq joined #gluster
11:24 Philambdo joined #gluster
11:30 nishanth joined #gluster
11:42 soumya joined #gluster
11:49 Manikandan joined #gluster
11:51 skoduri joined #gluster
11:52 maveric_amitc_ joined #gluster
12:06 HeresJohny joined #gluster
12:10 jtux joined #gluster
12:11 Slashman joined #gluster
12:11 overclk joined #gluster
12:11 ashiq joined #gluster
12:18 ctria joined #gluster
12:19 ppai joined #gluster
12:21 RedW joined #gluster
12:25 kanagaraj joined #gluster
12:25 jon__ joined #gluster
12:26 masterzen_ joined #gluster
12:26 lchabert poornimag - any log files needed ?
12:27 poornimag lchabert, yes client log files, glusterd log files
12:29 lchabert for client log file, can i change verbosity ? Any options to do this ?
12:29 kampnerj joined #gluster
12:30 poornimag gluster vol set <volname> diagnostics.client-log-level TRACE
12:33 lchabert and where are the logs stored ?
12:35 poornimag /var/log/glusterfs - unless you have configured otherwise
12:35 masterzen joined #gluster
12:35 lchabert ok, let me check
12:37 hagarth joined #gluster
12:40 perpetualrabbit joined #gluster
12:40 spalai joined #gluster
12:40 aaronott joined #gluster
12:42 lchabert poornimag - client side log (with mount options): http://expirebox.com/download/0bcd6ffa09dff280fbfb46121c652416.html
12:43 lchabert and glusterd properly stopped (no freeze)
12:45 lchabert and this one, with the server "suspended", without glusterd properly stopped: http://expirebox.com/download/40e845baef495025e24959dbe94430dd.html
12:48 poornimag oh have you enabled auth-allow-insecure option?
12:49 DV__ joined #gluster
12:50 poornimag gluster vol set <volname> rpc-auth-allow-insecure on
12:51 ndevos isnt rpc-auth-allow-insecure an option to put in the glusterd.vol file?
12:52 ndevos server.allow-insecure is the volume set option
12:52 s19n joined #gluster
12:55 poornimag ndevos, my bad
12:57 shyam joined #gluster
12:57 poornimag lchabert, gluster vol set <volname> server.allow-insecure on, and option rpc-auth-allow-insecure on in /etc/glusterfs/glusterd.vol
12:58 lchabert no, these options have not been set on the gluster server
12:58 dusmant joined #gluster
12:58 poornimag could you please try these
12:59 ppai joined #gluster
12:59 poornimag this will allow the clients to communicate using insecure ports(>1024)
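
Putting the two settings together, a sketch with a hypothetical volume name and an abbreviated glusterd.vol stanza (the shipped file carries more options; restart glusterd after editing it):
    gluster volume set gdata server.allow-insecure on

    # /etc/glusterfs/glusterd.vol
    volume management
        type mgmt/glusterd
        option rpc-auth-allow-insecure on
    end-volume
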
12:59 aaronott joined #gluster
13:00 cleong joined #gluster
13:01 overclk joined #gluster
13:05 julim joined #gluster
13:08 bennyturns joined #gluster
13:10 gospod joined #gluster
13:10 gospod hello everyone. can gluster be deployed on a NFS mount or just a local mount?
13:11 ndevos gospod: if you mean creating a gluster volume with storage on an NFS mount, then "no"
13:12 gospod ok
13:12 ndevos gospod: gluster uses extended attributes and NFS does not support those
13:12 gospod yeah
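
A quick way to check whether a prospective brick filesystem supports the extended attributes gluster needs; paths are hypothetical, and an NFSv3 mount will typically fail the setfattr step with "Operation not supported":
    touch /srv/candidate-brick/xattr-test
    setfattr -n user.test -v ok /srv/candidate-brick/xattr-test
    getfattr -n user.test /srv/candidate-brick/xattr-test
(Gluster itself stores trusted.* attributes, so the backend must support xattrs.)
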
13:12 gospod another question... because im about to deploy gluster in a KVM guest
13:12 gospod is LACP of any use to gluster and on which level should it be deployed, on host or guest?
13:13 ndevos LACP is bonding, right?
13:13 gospod yeah
13:14 mpietersen joined #gluster
13:14 ndevos it can be used, but it is more of an OS feature than a Gluster one
13:14 gospod yeah, would gluster benefit from 2x 1gbs nics to achieve 2gbs when needed? and also on host or guest?
13:16 ndevos gluster can use it just fine, you would probably need more connections to the gluster server to really benefit from it though
13:16 ndevos so, more clients, and not a 1:1 relation
13:16 gospod between gluster nodes
13:16 gospod do they use multi connections or single connections?
13:17 ndevos the gluster servers do not really communicate much to each other, unless you mount over NFS
13:17 ndevos ... or samba
13:18 gospod when I add another node to the gluster
13:18 gospod it needs to "fill up", correct?
13:18 ndevos yes, that is called "rebalance"
13:18 gospod will it fill up with 2gb/s or 1gb/s if configured with bonding?
13:19 gospod in my understanding it should rebalance with 2gb/s...
13:19 gospod it is communicating with several nodes to rebalance at once, correct?
13:20 ndevos I think rebalance only uses a single connection, so most likely 1gb/s unless you have a bonding mode that can split/merge tcp-streams
13:20 ndevos it will talk to different bricks (storage servers), but I think rebalance is single threaded, so only one brick at a time
13:20 gospod how can it rebalance with single connection if it would be set up with 3 copies?
13:21 ndevos ah. right, in that case, it would connect to 3 bricks at a time and read/write data to them
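
For reference, the rebalance being discussed is started and monitored per volume from any server in the pool; a sketch with a hypothetical volume name:
    gluster volume rebalance gdata start
    gluster volume rebalance gdata status
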
13:21 gospod what would be better to set up, nic bonding on host or guest?
13:22 ndevos so, the system that does the rebalance would benefit from bonding, but the receiving servers not so much
13:22 ndevos you probably need it on the host and the switches, I dont know how a guest could otherwise benefit from it
13:23 gospod oke
13:23 gospod thanks ndevos !
13:23 ndevos you're welcome gospod :)
13:23 gospod last mini question would be...
13:23 gospod zfs on linux + glusterfs = good combination or?
13:23 gospod absolutely need self healing against bitrot :)
13:24 ndevos some people use that, there is a page with some hints
13:24 gospod would you recommend any better underlying FS?
13:24 ndevos http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Gluster%20On%20ZFS/
13:24 Sjors joined #gluster
13:25 gospod i was thinking of deploying ZFS on FreeBSD but gluster isn't supported on BSD, right? or on a NFS mount, which we already debated :)
13:25 ndevos XFS is the best tested and the general recommendation, you would need to use glusterfs-3.7 to have bitrot functionality
13:26 ndevos gluster used to work on FreeBSD, but we have had only few people reporting about that lately, not sure how stable it is
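
The bitrot detection mentioned for 3.7 is enabled per volume; a sketch with a hypothetical volume name (the scrub settings are optional, check gluster volume help on your build for the exact syntax):
    gluster volume bitrot gdata enable
    gluster volume bitrot gdata scrub-frequency weekly
    gluster volume bitrot gdata scrub-throttle lazy
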
13:26 l0uis gospod: fwiw, we use bonding on our gluster cluster and it works great.
13:26 gospod l0uis: what mode?
13:26 l0uis gospod: 5 gluster servers
13:26 l0uis i forget. let me check
13:26 gospod yes please :)
13:26 l0uis balance-alb
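
A minimal sketch of a balance-alb (mode 6) bond on a Debian-style host; interface names and the address are hypothetical, and unlike 802.3ad/LACP this mode needs no switch-side configuration:
    # /etc/network/interfaces
    auto bond0
    iface bond0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode balance-alb
        bond-miimon 100
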
13:27 gospod l0uis: also what would you recommend if gluster was a KVM guest, deploy bonding on host or guest?
13:27 gospod thanks !
13:27 gospod alot really
13:27 ctria joined #gluster
13:27 l0uis i dont do kvm or virtual machine stuff, so i dont know. sorry :)
13:27 l0uis we use gluster primarily as a shared fs for a compute cluster.
13:27 gospod yeah hehe, here it will be a little reverse =P
13:28 gospod alot of non critical compute nodes will get gluster guests beside them
13:30 theron joined #gluster
13:30 glusterbot News from newglusterbugs: [Bug 1247152] SSL improvements: ECDH, DH, CRL, and accessible options <https://bugzilla.redhat.com/show_bug.cgi?id=1247152>
13:30 glusterbot News from newglusterbugs: [Bug 1247153] SSL improvements: ECDH, DH, CRL, and accessible options <https://bugzilla.redhat.com/show_bug.cgi?id=1247153>
13:31 gospod l0uis: mode6 needs switch configuration or?
13:33 gospod anyone knows if freebsd + linux jail with gluster would work? ndevos?
13:34 ndevos no idea, sorry
13:34 ndevos I do not think we have any special jail "enablement" features, if they are not needed, I guess it should just work?
13:37 lchabert poornimag - with the rpc* parameters set, same behaviour: my mount point freezes
13:37 jcastill1 joined #gluster
13:38 B21956 joined #gluster
13:40 SOLDIERz joined #gluster
13:42 jcastillo joined #gluster
13:45 ppai joined #gluster
13:45 dgandhi joined #gluster
13:45 theron joined #gluster
13:48 rwheeler joined #gluster
13:48 overclk joined #gluster
13:48 s19n Hello, is the "cluster.background-self-heal-count" option per-brick? I mean, should I make it lower on a host which has 4 bricks on itself?
13:50 ndevos I dont know, thats something the dht guys should be able to answer, but I dont see them online atm
13:50 ndevos maybe send your question to gluster-users@gluster.org?
13:52 s19n ndevos, thanks, I've already one question "in the queue" on the list (sent an hour ago), and looking for an answer I found that option
13:57 julim_ joined #gluster
13:57 cyberswat joined #gluster
14:00 glusterbot News from newglusterbugs: [Bug 1126831] Memory leak in GlusterFs client <https://bugzilla.redhat.com/show_bug.cgi?id=1126831>
14:01 paraenggu joined #gluster
14:02 paraenggu left #gluster
14:06 DV joined #gluster
14:13 chirino joined #gluster
14:15 gem joined #gluster
14:19 arcolife joined #gluster
14:19 bennyturns joined #gluster
14:19 jcastill1 joined #gluster
14:21 RameshN joined #gluster
14:22 spalai left #gluster
14:23 kshlm joined #gluster
14:24 jcastillo joined #gluster
14:25 overclk joined #gluster
14:26 togdon joined #gluster
14:28 nbalacha joined #gluster
14:31 kotreshhr joined #gluster
14:33 aaronott joined #gluster
14:35 jbrooks joined #gluster
14:39 kayn_ joined #gluster
14:39 mpietersen joined #gluster
14:45 necrogami joined #gluster
14:48 jiffin joined #gluster
14:49 skoduri joined #gluster
14:54 plarsen joined #gluster
14:54 kayn__ joined #gluster
14:58 jobewan joined #gluster
15:00 glusterbot News from newglusterbugs: [Bug 1247221] glusterd dies with OOM after a simple find executed on the volume <https://bugzilla.redhat.com/show_bug.cgi?id=1247221>
15:05 ekman left #gluster
15:05 calisto joined #gluster
15:05 wushudoin joined #gluster
15:10 kbyrne joined #gluster
15:10 kovshenin joined #gluster
15:12 _Bryan_ joined #gluster
15:14 hagarth joined #gluster
15:14 kovsheni_ joined #gluster
15:18 cholcombe joined #gluster
15:23 kovshenin joined #gluster
15:26 overclk joined #gluster
15:34 Leildin joined #gluster
15:35 _maserati joined #gluster
15:40 sage__ joined #gluster
15:41 yosafbridge joined #gluster
15:48 kotreshhr joined #gluster
16:05 Gill joined #gluster
16:06 JoeJulian s19n: no. It's per-client as the client does the self-heal (glustershd is a special client that runs as a daemon on the server, but it's still a client).
16:07 JoeJulian cc ndevos
16:08 nsoffer joined #gluster
16:08 s19n JoeJulian: so, in a server with 4 bricks, I would have up to 16x4 background selfheal operations?
16:10 JoeJulian I'd like to say yes, but one thing I'm not sure of is whether multiple self-heal daemons operate simultaneously. I can think of good and bad reasons for both cases.
16:10 JoeJulian Mostly what it means is that when your client tries to open 17 files, it'll hang on the 17th waiting for a self-heal.
16:10 JoeJulian (assuming all 17 need healed)
16:11 JoeJulian I still hate that behavior.
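
The option under discussion is a per-volume setting applied on the client side; a sketch with a hypothetical volume name (16 is the commonly cited default of that era):
    gluster volume set gdata cluster.background-self-heal-count 16
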
16:12 s19n I see; while increasing the replica number (in my case from 2 to 3)... which self-heal daemon has the responsibility to create data on the third brick of every set? Where will it run?
16:13 anrao joined #gluster
16:13 s19n to say, I see high load on the first two peers, but not on the third, which seems to just sit there waiting for data
16:15 JoeJulian Oh, well that's possibly easier to understand. Since it's a whole new brick, you might change the self-heal algorithm to full. I suspect it's trying to diff blocks and only copy blocks that differ (which, of course, is all of them).
16:15 JoeJulian But that calculation I suspect is where the load's coming from.
16:15 rafi joined #gluster
16:15 togdon joined #gluster
16:16 s19n shouldn't the 'reset' algo skip the diff on empty files?
16:17 JoeJulian I haven't read through that in a while. Are you looking at code that you can point me at?
16:18 s19n I'm pretty sure I've read it in the documentation
16:18 s19n http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options --uh, maybe outdated?
16:19 s19n Reset uses a heuristic model. If the file does not exist on one of the subvolumes, or a zero-byte file exists (created by entry self-heal) the entire content has to be copied anyway, so there is no benefit from using the "diff" algorithm.
16:19 JoeJulian That would be logical.
16:20 s19n unfortunately am in a hurry now; will try your suggestion anyway. I also sent a mail to the (users) list on the topic. Thanks!
16:24 JoeJulian Hmm, ok... that's unexpected.
16:25 calavera joined #gluster
16:29 JoeJulian If the self_heal_algorithm is reset, it'll do a full unless the sink exists with a non-zero size in which case it'll do a diff. If, however, you have specified either full or diff algorithms, it will always use the algorithm specified.
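
A sketch of pinning the algorithm explicitly, as suggested above for a freshly added replica, and reverting to the default heuristic afterwards (volume name hypothetical):
    gluster volume set gdata cluster.data-self-heal-algorithm full
    gluster volume reset gdata cluster.data-self-heal-algorithm
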
16:31 Leildin joined #gluster
16:36 PatNarciso joined #gluster
16:47 Rapture joined #gluster
16:52 phoenixstew joined #gluster
16:53 captainflannel joined #gluster
16:59 captainflannel had a weird issue on our gluster volume, we noticed the first server was running out of disk space due to gluster log files
17:00 captainflannel countless errors about client-X disconnected, client process will keep trying to connect to glusterd until brick's port is available
17:00 captainflannel the volume was online and accessible, and all the bricks showed as online in volume status
17:01 captainflannel stopping the volume, restarting the hosts, seemed to resolve.  we were running 3.5.3 and upgraded to 3.5.5 since the volume was offline
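
As a mitigation for logs filling a disk, the CLI can rotate a volume's logs on demand, and logrotate can cover the log directory; a sketch (volume name and logrotate policy are hypothetical, distribution packages usually ship their own policy):
    gluster volume log rotate gdata      # on some builds the syntax is: gluster volume log gdata rotate

    # /etc/logrotate.d/glusterfs-extra
    /var/log/glusterfs/*.log {
        weekly
        rotate 4
        compress
        missingok
        copytruncate
    }
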
17:02 overclk joined #gluster
17:05 theron_ joined #gluster
17:06 smohan joined #gluster
17:09 jcastill1 joined #gluster
17:14 jcastillo joined #gluster
17:17 kayn__ joined #gluster
17:19 shaunm_ joined #gluster
17:31 glusterbot News from newglusterbugs: [Bug 1247274] Unable to change the auth.allow setting on volumes due to spurious client version issue <https://bugzilla.redhat.com/show_bug.cgi?id=1247274>
17:31 jwd joined #gluster
17:48 Gill_ joined #gluster
17:49 rafi joined #gluster
17:54 Gill joined #gluster
18:03 B21956 joined #gluster
18:07 smohan joined #gluster
18:08 vimal joined #gluster
18:25 lchabert joined #gluster
18:30 aaronott joined #gluster
18:38 nsoffer joined #gluster
18:39 aaronott1 joined #gluster
18:43 s19n joined #gluster
18:43 kayn__ joined #gluster
18:46 nsoffer joined #gluster
18:47 ron-slc joined #gluster
18:50 aaronott joined #gluster
18:56 pppp joined #gluster
18:58 merlink joined #gluster
19:05 jiffin joined #gluster
19:06 s19n I'm back
19:06 s19n JoeJulian: you said "If, however, you have specified either full or diff algorithms, it will always use the algorithm specified."
19:06 s19n I did not set that option, so it should still be at the default value, i.e. "reset"
19:20 jobewan joined #gluster
19:28 chirino joined #gluster
19:31 coredump joined #gluster
19:39 shaunm_ joined #gluster
19:43 Slashman joined #gluster
19:48 B21956 joined #gluster
19:55 ron-slc joined #gluster
19:55 aaronott1 joined #gluster
19:56 ron-slc_ joined #gluster
20:00 mpietersen joined #gluster
20:02 ron-slc joined #gluster
20:04 nsoffer joined #gluster
20:05 togdon joined #gluster
20:09 ron-slc joined #gluster
20:10 Gill joined #gluster
20:18 kaushal_ joined #gluster
20:18 kaushal_ joined #gluster
20:25 ron-slc joined #gluster
20:31 mpietersen joined #gluster
20:35 mpietersen joined #gluster
20:42 PaulCuzner joined #gluster
20:45 rotbeard joined #gluster
20:54 sc0001 joined #gluster
20:55 chirino joined #gluster
20:59 dgandhi joined #gluster
21:07 uebera|| joined #gluster
21:08 cleong joined #gluster
21:12 beeradb joined #gluster
21:13 badone joined #gluster
21:17 cyberswat joined #gluster
21:20 B21956 left #gluster
21:23 smohan joined #gluster
21:35 _maserati My production gluster is currently sitting on 3.6.1, should I upgrade to the latest 3.6.x ? Any reason for moving to 3.7 yet?
21:46 julim joined #gluster
21:57 dijuremo joined #gluster
21:58 shyam joined #gluster
21:59 dijuremo Hi guys, got a quick question on gluster upgrade procedure... If I am on 3.6.4, is it straightforward to upgrade to 3.7.x by first stopping gluster on one node, upgrading, letting it sync, then bringing down the other node and upgrading that one?
22:03 dijuremo I am not running quota nor georeplication
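
A rough per-node sketch of the rolling upgrade described above, assuming an RPM-based install and a hypothetical volume name; the official 3.7 upgrade guide remains the authoritative reference:
    systemctl stop glusterd
    pkill glusterfs                  # the regex match also stops glusterfsd brick processes
    yum -y update glusterfs glusterfs-server glusterfs-fuse
    systemctl start glusterd
    gluster volume heal gdata info   # wait until no entries remain before upgrading the next node
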
22:04 calavera_ joined #gluster
22:19 nishanth joined #gluster
22:24 smohan joined #gluster
22:33 sc0001 joined #gluster
22:40 calisto joined #gluster
22:40 sc0001_ joined #gluster
22:45 JPaul joined #gluster
23:02 shyam joined #gluster
23:05 calavera joined #gluster
23:07 sc0001 joined #gluster
23:10 DV joined #gluster
23:10 coredump joined #gluster
23:22 calavera joined #gluster
23:29 theron joined #gluster
23:29 calisto joined #gluster
23:41 aaronott joined #gluster
23:43 Romeor wazap?
23:43 ron-slc_ joined #gluster
23:48 sc0001_ joined #gluster
23:52 ron-slc2 joined #gluster
23:58 dijuremo Hey Romeor
