
IRC log for #gluster, 2017-03-01


All times shown according to UTC.

Time Nick Message
00:00 major so on Ubuntu14 you have to manually specify all of those to their "new" defaults in order to build a btrfs filesystem that matches what would be created on Ubuntu16.. as an example
00:00 JoeJulian There are so many variables... the table to address them all would be huge.
00:01 major yah .. maybe
00:01 JoeJulian I imagine network packet size, latency, drive speed, seek speed, memory, workload, etc.
00:02 major heh
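
For reference on the mkfs defaults major mentions: the feature flags can be pinned explicitly at mkfs time. This is only a sketch; the device name and the exact feature set a newer btrfs-progs enables by default are assumptions, so check what your version reports first.

    mkfs.btrfs -O list-all                            # list the features this btrfs-progs build supports
    mkfs.btrfs -O extref,skinny-metadata /dev/sdb1    # hypothetical device; enable features a newer release turns on by default
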
00:03 major well .. I get to start testing gluster on btr on the new nodes this weekend
00:04 JoeJulian +1
00:04 major can't wait really
00:04 JoeJulian I'm a little jealous.
00:04 JoeJulian Those look like some fun toys to play with.
00:04 cyberbootje2 quick question, mtu 9K ? gluster doesn't have a problem with that?
00:04 JoeJulian I've never had a problem with 9k frames.
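
A minimal sketch of enabling 9k frames, assuming a hypothetical interface name eth0; the switch and every peer NIC have to carry the larger MTU as well.

    ip link set dev eth0 mtu 9000
    ping -M do -s 8972 <peer>    # verify the path really passes 9000-byte frames (8972 payload + 28 bytes of IP/ICMP headers)
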
00:08 major more notes to take
00:09 major I am generally looking at spinning this up with 2 nodes initially and then going through the exercise of adding bricks to existing volumes and generally seeing what sort of hellish corners I can get into
00:09 major and hopefully get the first portion of this snapshot code done before the weekend is over
00:11 cloph_away joined #gluster
00:12 major hmm
00:12 major I should be working against master I hope
00:13 ws2k3 joined #gluster
00:14 cyberbootje2 kvm is opening and closing gluster connections per VM, is that ok? or should i try to use something else that's persistent...
00:14 ankitr joined #gluster
00:20 major okay .. yah .. my reference branch should be master .. I am good
00:20 major well .. I am good sometimes .. maybe .. when asked politely
00:27 major okay .. gluster_take_snapshot() done and the duplicate crap gutted
00:27 major though .. I dunno if the subtle differences between the two instances in the error message are useful to try to retain
00:35 cyberbootje2 and my server is frozen..
00:35 cyberbootje2 kernel panic
00:35 major should have taken the blue pill
00:37 cyberbootje2 but my vm is in RO now, thought it would fail over :S
00:37 major thought you only had 1 brick
00:38 cyberbootje2 yeah i reinstalled everything and have now replicated
00:39 major ahhhh
00:40 major you work quick
00:40 cyberbootje2 haha
00:43 major hurm .. a portion of this is gonna be a lot more convoluted than I expected
00:44 major btr does a lot of stuff automagically that glusterd is manually handling
00:44 major the whole code-flow is .. off
00:48 major damn .. snap_brick_create() is in the way as well >.<
00:59 major JoeJulian, do you have any advice on where to point the destdir for the snapshot?
01:00 major it looks like gluster tries to mount LVM snapshots as a separate step, and off in /run/gluster/snaps/ .. but w/ btr you have to pick that destination when the snapshot is created
01:00 major and I am fairly certain it can't be pointed at /run/ ..
01:01 JoeJulian Right, it would have to be on the filesystem.
01:02 major I suppose if running on btr then /run/gluster/snaps could just be in and of itself a mount from btr
01:02 JoeJulian I would put them off the root of the filesystem. The brick is required to be a subdirectory.
01:02 JoeJulian True
01:02 major is there a doc that gives a clear picture of the directory structure?
01:03 major I think I can perform the core of the snap_brick_create() w/in take_btrfs_snapshot() and deal with the snap/ path there
01:04 major and hey .. if it explodes a horrible death it will happen on a completely new and unimportant filesystem ;)
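
A sketch of the asymmetry being discussed, with hypothetical volume group, brick, and snapshot names; gluster's LVM snapshots assume thinly provisioned bricks, and the btrfs source has to be a subvolume.

    # LVM: the snapshot is a new block device that glusterd mounts as a separate step
    lvcreate -s -n snap_brick1 vg_bricks/thin_brick1
    mkdir -p /run/gluster/snaps/snap1/brick1
    mount /dev/vg_bricks/snap_brick1 /run/gluster/snaps/snap1/brick1

    # btrfs: the destination is chosen at creation time and is immediately a usable directory tree
    mkdir -p /data/brick1/vol1-snaps
    btrfs subvolume snapshot /data/brick1/vol1 /data/brick1/vol1-snaps/snap1
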
01:07 shdeng joined #gluster
01:09 moneylotion joined #gluster
01:10 JoeJulian I think you're right, but I've never gotten that deep in to it.
01:11 major trying to find docs for the core of the layout in /run/gluster/ and w/in the backend filesystems themselves
01:12 JoeJulian On the rare occasion something is mounted in /run/gluster, I think it's immediately unmounted. If it was up to me, nothing would ever be mounted by the server. Use gfapi.
01:13 major not even the snaps?
01:13 ahino joined #gluster
01:16 major ...
01:17 major well .. like this document ... https://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
01:17 glusterbot Title: What is this new .glusterfs directory in 3.3? (at joejulian.name)
01:19 JoeJulian Gah... damn. I'd like to never have the server perform a mount. That way I could run it in a container without giving it that privilege.
01:19 major hmm
01:20 major well .. that should be feasible with btr given enough permissions to do a snap
01:20 major just need to figure out a sane location to land all the snaps so that it can be consistent regardless of the backend .. unless its fine to add stacks of if checks for different snap paths based on the underlying fs .. though .. that feels kinda clunky
01:20 JoeJulian But then how do you access the snapshot from a client. :(
01:21 major well .. like this post about the .glusterfs/ directory that you wrote
01:21 major not that the post directly deals with what I was thinking of .. more that I expected a .snapshots/ path to be mounted near the brick path
01:21 major or something similar
01:21 major or be configurable maybe?
01:22 JoeJulian Oh, right, you don't have to mount anything if you snapshot it to the same filesystem outside the brick path.
01:22 major /data/brick1/vol1/volume/ //data/brick1/vol1/snaps/
01:23 JoeJulian yeah
01:23 major then it all "just works" I think
01:23 major think .. yah .. that explains the burnt toast smell
01:23 JoeJulian Careful. Don't let the magic smoke out. If that happens, everything quits working.
01:24 major feels like that would work for the LVM side of things to avoid mounting into /run/gluster/ as well
01:24 JoeJulian Well lvm has to mount because it's a block device.
01:24 major but .. I don't understand that portion of the code well enough to understand "why" the snaps are out there .. and what other impact arises from that
01:25 major because .. honestly .. I think I would have had that sort of directory structure be implicitly created by cluster when intialzing a brick
01:25 major I also think I could take a moment to learn to spell
01:25 major or type
01:26 JoeJulian Well, you have legacy.
01:26 major like .. I can manually construct this structure and make it sort of assumed w/in the btr code ..
01:26 major just .. still feels clunky
01:27 JoeJulian Aight... heading home. ttfn
01:27 major stay safe
02:21 nishanth joined #gluster
02:40 d0nn1e joined #gluster
02:46 ankitr joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:53 derjohn_mob joined #gluster
02:53 major anyway .. for btrfs the snap_brick_create() generally has to be part of the take_btrfs_snapshot() ..
02:53 major just .. where to mount it
03:06 major maybe I can just put it into .snapshots/ in the top of the volume/
03:06 major though that generally feels like a scary idea
03:10 plarsen joined #gluster
03:10 major okay .. it is only a little bit scary ...
03:36 buvanesh_kumar joined #gluster
03:42 dominicpg joined #gluster
03:42 buvanesh_kumar_ joined #gluster
03:49 nbalacha joined #gluster
03:53 atinm joined #gluster
03:55 itisravi joined #gluster
04:08 oajs_ joined #gluster
04:10 kramdoss_ joined #gluster
04:16 sbulage joined #gluster
04:28 ankitr joined #gluster
04:43 Prasad joined #gluster
04:43 major la sigh .. okay .. soo .. it kinda looks more and more like .. as an xlator .. one can sort of do their own thing ..
04:50 Shu6h3ndu joined #gluster
04:51 jiffin joined #gluster
05:05 rafi1 joined #gluster
05:08 BitByteNybble110 joined #gluster
05:08 major I think one of the really frustrating aspects of this whole process is that most of the LVM code is operating on /dev/ devices, and the btrfs stuff just needs the filesystem
05:08 ankitr joined #gluster
05:08 kdhananjay joined #gluster
05:12 ksandha_ joined #gluster
05:13 apandey joined #gluster
05:16 jiffin1 joined #gluster
05:20 ndarshan joined #gluster
05:24 RameshN joined #gluster
05:24 prasanth joined #gluster
05:30 rafi1 joined #gluster
05:34 ppai joined #gluster
05:36 rafi1 joined #gluster
05:40 kotreshhr joined #gluster
05:43 riyas joined #gluster
05:43 Saravanakmr joined #gluster
05:48 skumar joined #gluster
05:51 aravindavk joined #gluster
05:52 atinm joined #gluster
05:54 rjoseph joined #gluster
05:59 itisravi_ joined #gluster
06:01 sanoj joined #gluster
06:05 ankitr_ joined #gluster
06:05 RameshN_ joined #gluster
06:08 susant joined #gluster
06:10 msvbhat joined #gluster
06:10 hgowtham joined #gluster
06:17 rjoseph joined #gluster
06:18 ankitr joined #gluster
06:19 karthik_us joined #gluster
06:26 saintpablos joined #gluster
06:26 Humble joined #gluster
06:28 buvanesh_kumar_ joined #gluster
06:32 susant joined #gluster
06:38 prasanth joined #gluster
06:45 chris349 joined #gluster
06:48 mhulsman joined #gluster
06:49 sbulage joined #gluster
06:52 ksandha_ joined #gluster
06:54 rastar joined #gluster
06:55 rastar joined #gluster
06:58 jkroon joined #gluster
07:01 buvanesh_kumar joined #gluster
07:10 skoduri joined #gluster
07:11 rjoseph joined #gluster
07:14 kotreshhr joined #gluster
07:21 RameshN joined #gluster
07:21 jtux joined #gluster
07:27 derjohn_mob joined #gluster
07:29 [diablo] joined #gluster
07:31 sona joined #gluster
07:31 itisravi_ joined #gluster
07:33 itisravi joined #gluster
07:33 rafi1 joined #gluster
07:36 Philambdo joined #gluster
07:37 msvbhat joined #gluster
07:37 Vytas_ joined #gluster
07:41 mbukatov joined #gluster
07:45 Go|Kule joined #gluster
07:48 ivan_rossi joined #gluster
07:49 Go|Kule Hello, I have a problem with the newest version of gluster, 3.10.0, on CentOS 7.2. After compiling gluster, when you try to run systemctl start glusterd or restart, there is a message "Failed to start glusterd.service: Unit not found." The status and stop commands are working. Any suggestions?
07:50 kshlm Go|Kule, Could you check the logs. Both the journalctl logs with "journalctl -u glusterd" and the glusterd logfile at /var/log/glusterfs/glusterd.log
07:53 itisravi joined #gluster
08:04 jwd joined #gluster
08:10 Shu6h3ndu joined #gluster
08:12 nishanth joined #gluster
08:25 armyriad joined #gluster
08:27 fsimonce joined #gluster
08:31 ashiq joined #gluster
08:34 atinm joined #gluster
08:37 armyriad joined #gluster
08:37 pulli joined #gluster
08:43 rastar joined #gluster
08:47 RameshN joined #gluster
08:50 flying joined #gluster
08:51 msvbhat joined #gluster
08:56 GoKule joined #gluster
09:00 GoKule Now I tried version 3.8.9 and got the same problem: systemctl start glusterd and systemctl restart glusterd are not working - Failed to start glusterd.service: Unit not found.
09:02 level7_ joined #gluster
09:04 sona joined #gluster
09:07 [fre] joined #gluster
09:09 karthik_us joined #gluster
09:19 kshlm GoKule, from where are you getting the glusterfs packages?
09:19 k4n0 joined #gluster
09:19 RameshN joined #gluster
09:20 kshlm The preferred glusterfs packages for CentOS are provided by the CentOS storage SIG, and these have the required systemd units.
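
For comparison with a source build, a sketch of the SIG route kshlm is pointing at; the release package name below is an assumption for the 3.10 series, so verify it against the Storage SIG documentation.

    yum install centos-release-gluster310   # assumed repo package name; enables the CentOS Storage SIG repository
    yum install glusterfs-server            # ships glusterd.service
    systemctl enable glusterd
    systemctl start glusterd
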
09:26 sona joined #gluster
09:31 ShwethaHP joined #gluster
09:31 GoKule joined #gluster
09:33 bhakti joined #gluster
09:35 GoKule Instead of copy-pasting, how do I make a web link and paste it here?
09:39 buvanesh_kumar joined #gluster
09:43 vinurs11 joined #gluster
09:43 buvanesh_kumar joined #gluster
09:44 vinurs joined #gluster
09:45 buvanesh_kumar joined #gluster
09:46 GoKule When the server is restarted, systemctl status glusterd gives
09:46 GoKule http://pastebin.com/raw/Y8Lv1w2v
09:46 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
09:49 GoKule Then
09:49 GoKule https://paste.fedoraproject.org/paste/sWnNQl09pEp1YfNShO3Cn15M1UNdIGYhyRLivL9gydE=/raw
09:53 GoKule glusterd.log https://paste.fedoraproject.org/paste/0jrNJ-rYlffmjvhuf72yO15M1UNdIGYhyRLivL9gydE=/raw
10:06 sona joined #gluster
10:17 auzty joined #gluster
10:21 Wizek_ joined #gluster
10:32 Philambdo joined #gluster
10:36 Seth_Karlo joined #gluster
10:37 Seth_Karlo joined #gluster
10:42 msvbhat joined #gluster
10:46 auzty joined #gluster
10:48 Seth_Kar_ joined #gluster
10:50 mhulsman joined #gluster
10:57 arpu joined #gluster
11:02 sanoj joined #gluster
11:17 msvbhat joined #gluster
11:19 rafi joined #gluster
11:30 vinurs joined #gluster
11:31 level7 joined #gluster
11:33 kramdoss_ joined #gluster
11:33 rafi1 joined #gluster
11:37 bfoster joined #gluster
11:39 bluenemo joined #gluster
11:54 skoduri joined #gluster
12:00 R0ok__ joined #gluster
12:09 BatS9 joined #gluster
12:10 BuBU29 joined #gluster
12:26 saintpablo joined #gluster
12:27 skoduri joined #gluster
12:30 amarts joined #gluster
12:31 amarts Hello all :-) good to be back after a long break!!
12:31 amarts how is it going :-)
12:32 rafi amarts: good to see you in gluster mailing list
12:39 jiffin1 joined #gluster
12:46 nh2 joined #gluster
12:51 level7 joined #gluster
12:56 k4n0 joined #gluster
13:09 vbellur1 joined #gluster
13:10 vbellur joined #gluster
13:11 saintpablo joined #gluster
13:12 vbellur joined #gluster
13:13 saintpablos joined #gluster
13:13 baber joined #gluster
13:19 prasanth joined #gluster
13:19 unclemarc joined #gluster
13:21 msvbhat joined #gluster
13:28 rastar joined #gluster
13:33 Saravanakmr joined #gluster
13:33 shyam joined #gluster
13:50 rastar joined #gluster
13:59 kdhananjay joined #gluster
14:00 buvanesh_kumar joined #gluster
14:03 vinurs joined #gluster
14:20 sona joined #gluster
14:21 ivan_rossi left #gluster
14:21 skoduri joined #gluster
14:22 shyam joined #gluster
14:26 kdhananjay left #gluster
14:37 atm0sphere joined #gluster
14:37 susant left #gluster
14:39 vbellur joined #gluster
14:39 skylar joined #gluster
14:45 ira joined #gluster
14:46 atinm joined #gluster
14:49 mlhamburg joined #gluster
14:59 kpease joined #gluster
15:00 apandey joined #gluster
15:00 plarsen joined #gluster
15:01 kshlm Community meeting is running in #gluster-meeting
15:02 sbulage joined #gluster
15:10 jiffin joined #gluster
15:11 tallmocha joined #gluster
15:14 amye joined #gluster
15:46 farhorizon joined #gluster
15:54 chris349 joined #gluster
15:59 tallmocha joined #gluster
16:03 snehring joined #gluster
16:07 wushudoin joined #gluster
16:15 arpu joined #gluster
16:15 k0nsl joined #gluster
16:15 k0nsl joined #gluster
16:19 baber joined #gluster
16:27 buvanesh_kumar joined #gluster
16:31 major Technically shouldn't it be feasible to do distributed reads from replicated volumes?
16:38 riyas joined #gluster
16:47 jdossey joined #gluster
16:54 Shu6h3ndu joined #gluster
17:05 plarsen joined #gluster
17:06 shortdudey123 joined #gluster
17:10 ksandha_ joined #gluster
17:20 baber joined #gluster
17:22 jiffin joined #gluster
17:28 gnulnx joined #gluster
17:28 kpease joined #gluster
17:35 kraynor5b_ joined #gluster
17:37 derjohn_mob joined #gluster
17:38 kpease_ joined #gluster
17:47 SlickNik left #gluster
17:50 kpease joined #gluster
17:57 level7 joined #gluster
18:00 Wizek_ joined #gluster
18:00 skylar joined #gluster
18:00 level7_ joined #gluster
18:19 Karan joined #gluster
18:32 ikla joined #gluster
18:32 ic0n joined #gluster
18:36 major in the long run I would like to do a Gluster on a Dolphin network
18:37 d0nn1e joined #gluster
18:39 atinm joined #gluster
18:59 skylar joined #gluster
19:01 vico_ joined #gluster
19:13 moneylotion hey all, i was having some slowness issues with both small and large files - on two core nas boxes - are my cpus the issue?
19:14 moneylotion zfs backend, 8 gb ram, 1gbe
19:14 major what sort of file and file operations?
19:15 moneylotion i tried a large variety, everything from video, psds, owncloud, sort of a mix tested
19:16 moneylotion scanning directories with samba, netatalk, nginx, php-fpm - even rsync took a long time to poll
19:16 major how many nodes/bricks and in what config?
19:17 major or was that 2 nodes as opposed to unknown number of 2-core nodes?
19:17 moneylotion two nodes, two bricks per volume. distribute, and replica.  distribute was a bit faster, but still sort of slow
19:17 ahino joined #gluster
19:17 moneylotion 2 nodes, each w/ 2 cores, 8gb ram, 1gbe lan
19:17 major gotcha
19:17 moneylotion would the cpu upgrade help?, or are there other issues in the chain?
19:17 major and you are testing from a 3rd system via NFS or a native gluster-client?
19:18 moneylotion didn't mess with reconfiguring any settings yet
19:18 moneylotion i tested from a 3rd system (slower)... most of my tests were done with samba and netatalk or rsync locally
19:18 moneylotion didn't try nfs
19:19 major hmm
19:19 major and single gigE?
19:19 moneylotion yep
19:20 moneylotion i have 5 drive raid 5 (zfs), so locally I have the io - I only assume I could get more bandwidth out of the network
19:20 major what sort of performance were you seeing that was slow to you?
19:20 moneylotion like 20 MB/s
19:20 major hmm .. yah .. thats slower than I would have expected
19:21 major everything running jumboframes?
19:21 moneylotion couldn't serve files at all from the gluster mount, the nginx server would just time out
19:21 moneylotion no jumbo frames
19:22 major double odd
19:22 moneylotion ???
19:22 moneylotion ohh double gotcha
19:22 snehring have you tested local performance just to be sure?
19:22 snehring like run fio against your zfs dataset or similar
19:23 moneylotion i did a dd /dev/zero to the mount, and got something similar to those speeds
19:23 Seth_Karlo joined #gluster
19:23 moneylotion no where near the 100-500 MB/s over zfs
19:23 snehring not a good test with zfs
19:24 snehring pretty sure it discards sparse 0s
19:24 moneylotion ohh sure
19:25 snehring fio's not a bad option, but if you want to do it with just dd you could generate a 1-10G blob of random data and then try copying that over to zfs
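
A sketch of the two tests snehring suggests; the blob size, dataset path, and mount point are hypothetical.

    # random data defeats the zero/sparse shortcuts that make dd if=/dev/zero misleading on ZFS
    dd if=/dev/urandom of=/tmp/blob.bin bs=1M count=2048
    dd if=/tmp/blob.bin of=/tank/brick1/blob.bin bs=1M conv=fsync     # copy onto the ZFS dataset, flushing at the end
    dd if=/tmp/blob.bin of=/gluster-mount/blob.bin bs=1M conv=fsync   # repeat against the gluster mount for comparison

    # or fio straight against the ZFS dataset backing the brick
    fio --name=seqwrite --directory=/tank/brick1 --rw=write --bs=1M --size=2G --numjobs=1
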
19:27 baber joined #gluster
19:28 major in my experience it has always been difficult to get better than ~115MB/sec over gigE regardless of what I am doing on it .. even straight rsync .. but 20MB/sec ... that feels like a huge miss...
19:28 snehring yeah something going on there for sure
19:29 major if its using small MTU's then the cpu could be spending most of its time context switching to service interrupts
19:29 snehring even default mtu should be able to do better than that
19:29 major but you would expect to see similar network penalties in doing an rsync from node to node
19:30 major or a giant 10G random data scp w/out compression and encryption
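
A sketch of that node-to-node sanity check, bypassing gluster entirely; host names and paths are hypothetical, and scp only lets you turn off compression, not encryption.

    dd if=/dev/urandom of=/tmp/10G.bin bs=1M count=10240
    rsync -P /tmp/10G.bin node2:/tmp/                 # shows live throughput for the transfer
    scp -o Compression=no /tmp/10G.bin node2:/tmp/    # alternative single-stream copy
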
19:30 moneylotion with replica it needs to write twice, so theoretically lets say i get 50 MB/s
19:30 moneylotion *** running a test now
19:31 major this is back to having me wondering about doing distributed reads on replicated data .. it isn't like you can't have the client do open/seek on each copy of the file...
19:32 major which feels like a neat option for generally read-only data (movies, audio, etc)
19:35 moneylotion dd /dev/zero to straight zfs gets 397 MB/s, to gluster in replica gets 92.1 MB/s
19:36 major local gluster or over gigE?
19:36 moneylotion local gluster ( my only mount ) - so is there a lot of overhead for multiple mounts?
19:36 major so you mounted the gluster volume onto the same server?
19:37 moneylotion i was getting like 100 active tasks, for 10 or so mounts
19:37 moneylotion yeah
19:37 major kk
19:38 farhorizon joined #gluster
19:39 moneylotion @major, whats you typical configuration look like?
19:39 moneylotion * your
19:40 moneylotion -- and what are your results
19:41 major currently I am bringing up 3 storage nodes w/ 2 8TB drives each and dual 10G.  I am looking to be doing 2-way replication between bricks w/ a 4th node acting as an arbiter for the other .. 6 bricks?
19:41 ira joined #gluster
19:41 major its all a brand new network at this point .. and .. sadly .. I am 100 miles away from it and the last of the new hardware arrived yesterday.. ultra antsy about getting back home to play with it :)
19:41 major the OTHER network I operate on uses a different filessytem attached to infiniband .. so it isn't a good comparison
19:42 major so far as performance .. I couldn't give you any good numbers for my new stuff until tomorrow evening .. like .. closer to midnite US/Pacific :(
19:42 moneylotion do you think it would help to do a distributed replica server1:/mnt1 server1:/mn2 server2:/mn1 server2:/mn2
19:42 major ~2hr train ride back home
19:43 major mnt1 and mnt2 are physically different drives?
19:43 moneylotion different folders on the same raid
19:44 Gambit15 joined #gluster
19:44 major I don't think that would gain much in that config
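
One detail worth noting about the layout moneylotion proposed: with replica 2, each consecutive pair of bricks in the create command forms one replica set, so the order server1:/mnt1 server1:/mnt2 server2:/mnt1 server2:/mnt2 would put both copies of some files on the same server. A sketch of the interleaved form (volume name and brick paths are hypothetical), even though it adds no spindles when both bricks sit on the same RAID:

    gluster volume create distrep replica 2 \
        server1:/mnt1/brick server2:/mnt1/brick \
        server1:/mnt2/brick server2:/mnt2/brick
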
19:44 moneylotion * 4 times the writes
19:44 major last I checked gluster will already do a sort of distributed read between replicas in that it is sort of opportunistic regarding which node it reads from
19:45 major so if your client is reading multiple files, then each file will likely be read from each of the nodes
19:45 major just depends on which node responds to the file request first
19:45 major which is .. in and of itself .. generally a good enough solution for most use cases
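
major's description matches AFR's default read-child selection; if you want reads spread differently across replicas there are volume options for it. A hedged sketch, with the option name from memory and a placeholder volume name, so verify against `gluster volume set help` on your release.

    gluster volume get test-volume cluster.read-hash-mode
    gluster volume set test-volume cluster.read-hash-mode 2   # choose the read replica by hashing gfid + client pid
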
19:46 major your write to the gluster volume .. was that volume already in replication?
19:47 moneylotion i created a new volume "test", for the write (replica)
19:47 major so the write was still going across gigE?
19:48 major because 90MB/sec across gigE is not bad IMHO
19:48 major 720Mb/sec
19:48 moneylotion yeah i didn't monitor the network, so I can't tell if it cached or queued, or if the write was in real time
19:48 moneylotion yeah thats great
19:50 major maybe time copying a big fat ISO image to the gluster mount :)
19:51 major but yah .. writing to both the backends at the same time from a 3rd machine is going to require writing twice out the 1 port...
19:51 derjohn_mob joined #gluster
19:51 major can't do dual-gigE?
19:51 moneylotion copied all of my amd64 isos - getting like 700 mb/s on the network  :/
19:51 moneylotion getting a new switch with lacp at some point
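
Worth noting on the LACP plan: an 802.3ad bond hashes traffic per flow, so a single gluster TCP connection still tops out at one link; it helps when several bricks or clients are talking at once. A sketch of the host side with hypothetical interface names (the switch ports need LACP configured as well):

    ip link add bond0 type bond mode 802.3ad
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up
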
19:52 * moneylotion thinks the problem that I am experiencing relates to having multiple mounts, and the sheer volume of files
19:53 major well .. if they are mostly images and such then it is likely a more relevant test to see what the read-performance is like vs the write performance
19:53 major all the synchronization, IP overhead, and the client writing multiple copies in multiple directions .. thats gonna make write performance take a nose dive
19:55 major I think this is sort of where dispersed volumes come into play
19:57 farhoriz_ joined #gluster
19:58 farhori__ joined #gluster
19:59 msvbhat joined #gluster
20:00 farhoriz_ joined #gluster
20:00 major man .. now I want to go try really stupid and totally not sane configurations...
20:05 vbellur1 joined #gluster
20:07 mhulsman joined #gluster
20:08 vbellur1 left #gluster
20:08 baber joined #gluster
20:10 ashiq joined #gluster
20:11 major was the md-cache ever pushed into release?
20:12 major or rather .. was upcalling ever enabled by default in md-cache
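
For reference, the md-cache/upcall integration in that era was switched on per volume with options along these lines; a hedged sketch with names from memory and a placeholder volume name, not a definitive answer to whether any release enabled it by default.

    gluster volume set test-volume features.cache-invalidation on
    gluster volume set test-volume features.cache-invalidation-timeout 600
    gluster volume set test-volume performance.cache-invalidation on
    gluster volume set test-volume performance.md-cache-timeout 600
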
20:29 kraynor5b joined #gluster
20:30 kraynor5b__ joined #gluster
20:41 kraynor5b joined #gluster
20:48 purpleid1a joined #gluster
20:49 owlbot` joined #gluster
20:49 mpingu joined #gluster
20:50 d-fence__ joined #gluster
20:52 ebbex_ joined #gluster
20:52 Bardack_ joined #gluster
20:52 nobody482 joined #gluster
20:55 pdrakewe_ joined #gluster
21:00 foster joined #gluster
21:01 joshin joined #gluster
21:01 joshin joined #gluster
21:05 farhorizon joined #gluster
21:07 PatNarciso joined #gluster
21:09 PatNarciso connection timeout?  lies.
21:26 poxbat joined #gluster
21:28 major has anyone ever tried to use gluster as the backend filesystem for gluster?
21:38 Vapez joined #gluster
21:38 Vapez joined #gluster
21:41 gem joined #gluster
21:50 shyam joined #gluster
21:52 pulli joined #gluster
21:58 vbellur joined #gluster
21:59 BitByteNybble110 Seeing this in the ganesha.log on one node of a three node cluster -> https://paste.fedoraproject.org/paste/RGpnguTY2qMt2mva9Leh7F5M1UNdIGYhyRLivL9gydE=
21:59 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
21:59 BitByteNybble110 It seems to have stopped for the time being, but I'm wondering if it's something I need to be concerned about
22:00 BitByteNybble110 All nodes are running 3.9.1.
22:00 vbellur joined #gluster
22:01 vbellur joined #gluster
22:01 vbellur joined #gluster
22:02 vbellur joined #gluster
22:44 shyam joined #gluster
23:02 major okay .. config idea:
23:02 major gluster volume create test-volume stripe 3 replica 3 arbiter 1 \
23:02 major node1:/data/brick1 node2:/data/brick1 node4:/data/brick1 \
23:02 major node1:/data/brick2 node3:/data/brick1 node4:/data/brick2 \
23:02 major node2:/data/brick2 node3:/data/brick2 node4:/data/brick3
23:11 cliluw joined #gluster
23:20 jeffspeff joined #gluster
23:21 mlg9000 joined #gluster
23:30 misc joined #gluster
23:39 vinurs joined #gluster
23:39 shyam joined #gluster
23:49 jwd joined #gluster
