
IRC log for #gluster, 2015-08-19


All times shown according to UTC.

Time Nick Message
00:03 overclk joined #gluster
00:12 plarsen joined #gluster
00:13 jrm16020 joined #gluster
00:21 mmckeen chan5n: there is a possibility that whatever is listening to inotify, given that it will then operate on the glusterfs volume, will find the file as seen by glusterfs in a state inconsistent with whatever brick fired the inotify event
00:21 mmckeen chan4n: this is because gluster fops return the result of whichever brick returns first, assuming a read
00:22 mmckeen chan4n: and the bricks could possibly be inconsistent
00:24 elico joined #gluster
00:25 mmckeen chan4n: you might be able to work around this by waiting for a duplicate inotify event on every brick before a fop to the cluster
00:26 mmckeen chan4n: but I have to think about this a little more before I decide that to be safe
00:26 mmckeen chan5n: ^ :)
00:32 mynam joined #gluster
00:49 mmckeen chan5n: also I believe that both nfs and fuse mounts work with inotify
00:49 mmckeen chan5n: might be better just to use another mount type other than libgfapi
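A minimal sketch of the per-brick wait mmckeen suggests above, assuming inotify-tools is installed on a node that can see every brick; the brick paths, mount point and replica count are placeholders, and events that fire before the watchers start would be missed:

    #!/bin/bash
    # Hypothetical sketch: wait until every replica brick reports the same
    # close_write event before touching the file through the gluster mount.
    BRICKS="/bricks/b1 /bricks/b2"          # assumed local paths of the replica bricks
    FILE_REL="$1"                           # file path relative to the brick root
    for b in $BRICKS; do
        inotifywait -e close_write "$b/$FILE_REL" &   # one watcher per brick
    done
    wait                                    # returns once all bricks have seen the event
    cat "/mnt/glustervol/$FILE_REL"         # now read it via the gluster client mount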
00:53 gildub joined #gluster
01:09 dlambrig joined #gluster
01:10 elico joined #gluster
01:13 dlambrig joined #gluster
01:20 woakes070048 joined #gluster
01:23 gildub joined #gluster
01:24 julim joined #gluster
01:33 woakes070048 what do you guys recommend to test my setup? it is a three-node replica over InfiniBand and I'm using it for ovirt 3.5
01:40 Lee1092 joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:50 dlambrig joined #gluster
01:55 nangthang joined #gluster
01:58 _Bryan_ joined #gluster
02:05 haomaiwa_ joined #gluster
02:13 dlambrig joined #gluster
02:15 baojg joined #gluster
02:16 victori joined #gluster
02:43 harish joined #gluster
02:47 gildub joined #gluster
02:48 baojg joined #gluster
02:55 badone_ joined #gluster
03:02 haomaiwa_ joined #gluster
03:09 autoditac__ joined #gluster
03:13 bennyturns joined #gluster
03:26 plarsen joined #gluster
03:30 bharata-rao joined #gluster
03:31 prg3 joined #gluster
03:36 [7] joined #gluster
03:36 shubhendu joined #gluster
03:36 pocketprotector joined #gluster
03:37 pocketprotector Hi, I have the feeling I am doing something wrong. I am having a difficult time installing glusterfs-server because it's a package that is not readily available.
03:38 pocketprotector So I installed the gluster-epel repo, and I eventually got it working on two of the nodes in my 7-node cluster. But I did so much backtracking to get it to work that now I can't figure out what did the trick.
03:40 kanagaraj joined #gluster
03:40 pocketprotector Does it make more sense to compile the source RPM and stick it in my epel6-x86_64 repo than trying to get the glusterfs repo and epel6-x86_64 to work together? There are packages that are requirements to install the binary in glusterfs-epel located in the epel6-x86_64 repo..
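For reference, a hedged sketch of the install path being discussed, assuming CentOS/RHEL 6 with the community glusterfs-epel repo; the repo URL, package names and service name reflect the packaging of the time and should be treated as assumptions:

    # assumptions: EL6, EPEL reachable, community gluster repo still at this URL
    wget -O /etc/yum.repos.d/glusterfs-epel.repo \
        http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
    yum install -y epel-release              # dependencies come from plain EPEL
    yum install -y glusterfs-server          # pulls in glusterfs, glusterfs-fuse, etc.
    service glusterd start && chkconfig glusterd on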
03:42 dlambrig joined #gluster
03:42 itisravi joined #gluster
03:45 nthomas joined #gluster
03:46 nbalacha joined #gluster
03:48 jcastill1 joined #gluster
03:53 jcastillo joined #gluster
03:57 folivora_ joined #gluster
04:02 atinm joined #gluster
04:02 haomaiwa_ joined #gluster
04:03 dlambrig joined #gluster
04:05 kkeithley1 joined #gluster
04:07 kdhananjay joined #gluster
04:13 ppai joined #gluster
04:17 gem joined #gluster
04:17 gem_ joined #gluster
04:22 skoduri joined #gluster
04:23 ramteid joined #gluster
04:23 dusmant joined #gluster
04:23 yazhini joined #gluster
04:32 hgowtham joined #gluster
04:32 ashiq joined #gluster
04:37 neha joined #gluster
04:43 jiffin joined #gluster
04:46 ndarshan joined #gluster
04:52 deepakcs joined #gluster
04:53 Bhaskarakiran joined #gluster
05:00 meghanam joined #gluster
05:01 vimal joined #gluster
05:02 haomaiwang joined #gluster
05:04 sakshi joined #gluster
05:05 m0zes joined #gluster
05:07 aravindavk joined #gluster
05:10 Manikandan joined #gluster
05:11 poornimag joined #gluster
05:23 Bhaskarakiran joined #gluster
05:25 JPaul joined #gluster
05:32 kotreshhr joined #gluster
05:34 rafi joined #gluster
05:36 skoduri joined #gluster
05:39 atalur joined #gluster
05:39 yazhini joined #gluster
05:40 atalur joined #gluster
05:43 anil joined #gluster
05:43 vmallika joined #gluster
05:49 ramky joined #gluster
05:49 volga629 joined #gluster
05:50 volga629 Hello Everyone, how can I find the correct link for the documentation?
05:50 volga629 https://gluster.readthedocs.org/en/latest/Features/mount_gluster_volume_using_pnfs/
05:50 nbalacha joined #gluster
05:50 volga629 this link is not working
05:51 volga629 trying to configure ganesha pNFS 2.1, but I'm stuck with the configuration and can't find the correct options
05:54 ramky joined #gluster
05:56 anoopcs jiffin, ^^
05:58 Saravana_ joined #gluster
05:58 baojg joined #gluster
06:02 haomaiwang joined #gluster
06:03 raghu joined #gluster
06:06 kshlm joined #gluster
06:07 meghanam_ joined #gluster
06:14 ding2go joined #gluster
06:14 maveric_amitc_ joined #gluster
06:15 nthomas joined #gluster
06:15 arcolife joined #gluster
06:15 shubhendu joined #gluster
06:16 nangthang joined #gluster
06:16 ndarshan joined #gluster
06:19 hchiramm_ volga629, hi
06:19 volga629 Hello
06:19 glusterbot volga629: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:19 volga629 hi
06:19 glusterbot volga629: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:19 volga629 :-)
06:19 hchiramm_ recently the feature pages have been moved to a different repo
06:19 hchiramm_ https://github.com/gluster/glusterfs-specs
06:19 glusterbot Title: gluster/glusterfs-specs · GitHub (at github.com)
06:19 hchiramm_ volga629, ^^
06:20 hchiramm_ https://github.com/gluster/glusterfs-specs/blob/master/Features/mount_gluster_volume_using_pnfs.md
06:20 glusterbot Title: glusterfs-specs/mount_gluster_volume_using_pnfs.md at master · gluster/glusterfs-specs · GitHub (at github.com)
06:20 hchiramm_ volga629, ^^
06:21 volga629 ok thank you going read right now
06:21 Bhaskarakiran joined #gluster
06:22 dlambrig joined #gluster
06:22 sripathi1 joined #gluster
06:23 hchiramm_ volga629, np
06:25 rafi joined #gluster
06:32 arcolife joined #gluster
06:33 jtux joined #gluster
06:34 Bhaskarakiran joined #gluster
06:39 kaushal_ joined #gluster
06:41 SeerKan joined #gluster
06:50 badone__ joined #gluster
06:54 ppai_ joined #gluster
06:54 chirino joined #gluster
06:56 atalur joined #gluster
07:00 sripathi1 joined #gluster
07:01 haomaiwa_ joined #gluster
07:07 kshlm joined #gluster
07:10 nbalacha joined #gluster
07:16 nthomas joined #gluster
07:16 archit_ joined #gluster
07:19 ndarshan joined #gluster
07:21 social joined #gluster
07:27 fsimonce joined #gluster
07:28 Alex31 joined #gluster
07:36 jwd joined #gluster
07:43 archit_ joined #gluster
07:45 atalur joined #gluster
07:48 shubhendu joined #gluster
07:48 ppai_ joined #gluster
07:50 tanuck joined #gluster
07:51 nangthang joined #gluster
07:52 Pupeno joined #gluster
07:52 wnlx joined #gluster
07:54 Trefex joined #gluster
07:55 ndarshan joined #gluster
07:55 RedW joined #gluster
07:57 archit__ joined #gluster
07:58 Norky joined #gluster
07:59 elico joined #gluster
08:00 ctria joined #gluster
08:02 haomaiwa_ joined #gluster
08:08 Slashman joined #gluster
08:11 deepakcs joined #gluster
08:21 timbyr_ joined #gluster
08:24 muneerse joined #gluster
08:30 sripathi2 joined #gluster
08:32 chan5n joined #gluster
08:55 shubhendu joined #gluster
09:02 haomaiwang joined #gluster
09:06 chan5n left #gluster
09:09 chan5n joined #gluster
09:14 social joined #gluster
09:14 _shaps_ joined #gluster
09:22 Alex31 rastar: hi :)
09:22 Alex31 rastar: I have upgraded to 3.6 over 4 Debian servers
09:23 Alex31 rastar: the copy time is very interesting with big files, but very very slow with small ones
09:25 tanuck_ joined #gluster
09:26 DV_ joined #gluster
09:28 doekia joined #gluster
09:28 tanuck__ joined #gluster
09:30 DV joined #gluster
09:32 Alex31 rastar: is there a way to copy to a cache before copying directly onto the GFS volume?
09:39 jvandewege joined #gluster
09:42 yazhini joined #gluster
09:46 ppai_ joined #gluster
09:50 kovshenin joined #gluster
09:53 LebedevRI joined #gluster
10:04 haomaiwa_ joined #gluster
10:04 meghanam_ joined #gluster
10:05 shubhendu joined #gluster
10:08 kshlm joined #gluster
10:12 kshlm joined #gluster
10:12 kdhananjay1 joined #gluster
10:17 s19n joined #gluster
10:18 shakaran joined #gluster
10:18 poornimag joined #gluster
10:19 shakaran Hi, I have a 1TB server filled with data and gluster, and I would like to add a new 3TB server as second storage. What should the steps be? Is there a guide for adding a new volume/brick?
10:20 primusinterpares joined #gluster
10:27 baojg joined #gluster
10:29 tanuck joined #gluster
10:29 dlambrig joined #gluster
10:32 skoduri shakaran, you may refer to this link - http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Volumes/#expanding-volumes
10:32 glusterbot Title: Managing Volumes - Gluster Docs (at gluster.readthedocs.org)
10:33 shakaran skoduri, nice, I will read now :) very helpful
10:34 skoduri :)
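The expand procedure behind the link skoduri posted boils down to a peer probe, an add-brick and a rebalance; a short sketch with placeholder names (VOLNAME, newserver, /bricks/brick1):

    gluster peer probe newserver                               # add the new 3TB server to the pool
    gluster volume add-brick VOLNAME newserver:/bricks/brick1  # grow the volume with its brick
    gluster volume rebalance VOLNAME start                     # spread existing data onto it
    gluster volume rebalance VOLNAME status                    # watch progress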
10:43 woakes070048 joined #gluster
11:04 hchiramm_ skoduri++ thanks !
11:04 glusterbot hchiramm_: skoduri's karma is now 1
11:08 chirino joined #gluster
11:12 harish joined #gluster
11:12 atinm joined #gluster
11:16 jrm16020 joined #gluster
11:20 tanuck joined #gluster
11:22 tanuck joined #gluster
11:32 rafi1 joined #gluster
11:49 rjoseph joined #gluster
11:51 neha joined #gluster
11:52 meghanam_ REMINDER : Gluster Weekly  Community Meeting will start in ~10 minutes on #gluster-meeting.
11:53 yazhini joined #gluster
11:56 firemanxbr joined #gluster
11:59 plarsen joined #gluster
12:03 elico joined #gluster
12:04 jdarcy joined #gluster
12:06 rastar Alex31: still here?
12:06 gp joined #gluster
12:06 Alex31 rastar: yep
12:07 Guest4573 joined #gluster
12:07 rastar Alex31: gluster vol set help | grep io-cache
12:08 itisravi joined #gluster
12:08 rastar io-cache is an xlator (a component on the client side); it is part of the smb process when smb is connected to gluster
12:09 unclemarc joined #gluster
12:10 julim joined #gluster
12:11 jtux joined #gluster
12:11 Alex31 rastar: ok, thks !
12:12 rastar Alex31: io-cache defaults are good if you do a lot of read same data kind of workloads
12:12 rjoseph joined #gluster
12:15 Alex31 rastar: ok, I understand... I'm going to play with it :)
12:16 rastar Alex31: io-cache stores data in cache while doing writes.. But if you don't have a write-and-read-immediately kind of workload, it will just make writes slower
12:16 rastar so either increase the timeouts or disable io-cache for now and try
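The knobs rastar is pointing at are ordinary volume options; a hedged sketch, where VOLNAME and the values are placeholders and the option names come from "gluster volume set help":

    gluster volume set help | grep io-cache                            # list the io-cache tunables
    gluster volume set VOLNAME performance.cache-refresh-timeout 10    # serve cached data longer
    gluster volume set VOLNAME performance.cache-size 256MB            # larger read cache
    gluster volume set VOLNAME performance.io-cache off                # or disable it and compare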
12:17 Alex31 slower writes are not really important for me... I think a user can wait a few seconds more to save his document, but not to open it...
12:18 Alex31 rastar: for example, a classic user action is to quickly read the name of the document, open it, and see if it is the document he wants to work on ...
12:19 Alex31 rastar: if not, he's going to close it and open the next one
12:20 Alex31 rastar: so, if the open takes 10 seconds, grrr, he's going to get irritated
12:20 rastar Alex31: even for that io-cache does not help..
12:21 rastar its timeouts are for an application doing successive reads..user actions are slower than timeouts..but I get what your workloads are..
12:22 Alex31 rastar: but ... actually, I see a very big time difference for listing a directory between the GFS mount directly on the server and the same directory shared by samba
12:23 rastar Alex31: on the server you would do an "ls -l" on the command line, and by Samba you mean the Windows UI?
12:23 Alex31 rastar: so 2 options: work on and optimize the samba options to speed up the processing, or work on the VFS and try to get a better response time
12:24 Alex31 I do my test with a classic ls -l in /data and the dir command from the same server but with the samba client
12:24 rastar Alex31: Samba client?
12:25 rastar you are not using Windows?
12:25 Alex31 rastar:  not for the test
12:25 rastar Hmm..
12:25 rastar the cifs kernel module uses only SMB 1.0 and it's not that great for performance..
12:25 Alex31 rastar: for working directly on the server
12:25 Alex31 arf
12:25 B21956 joined #gluster
12:25 Alex31 ok
12:26 rastar Alex31:  smbclient is a bit better but is not truly indicative of what performance your end-users will see from Windows machines
12:26 Alex31 rastar: thanks you for this  information ....
12:26 Alex31 rastar: mhmm, yes sure !
12:27 rastar Alex31: Your workload will have only SMB access on files right?
12:28 rastar I mean no mount -t glusterfs or mount -t nfs of the same volume when you are using through SMB...
12:29 rastar Alex31: If that is the case, you can do some optimizations in smb.conf. You can turn off a. kernel share modes b. posix locking c. kernel oplocks.
12:34 Alex31 rastar: exactly. only samba access. maybe a few modifications by script for maintenance tasks but nothing important.
12:35 rastar Alex31: then those three changes ^^ will show some good performance benefits..
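A sketch of those three smb.conf changes on the gluster-backed share; the share name and path are placeholders, and each setting goes in the share section it should apply to:

    [gfsshare]
        path = /data
        # the three settings rastar lists above
        kernel share modes = no
        posix locking = no
        kernel oplocks = no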
12:35 arcolife joined #gluster
12:36 plarsen joined #gluster
12:39 haomaiwa_ joined #gluster
12:41 elico joined #gluster
12:42 julim joined #gluster
12:43 gildub joined #gluster
12:47 Alex31 rastar: It seems samba 4 locks files very strangely over GFS
12:47 Alex31 rastar: I'm going to test with samba 3
12:48 rastar Alex31: I would be interested to know what was strange..I use it with Samba 4 and it works fine after I have "kernel share modes = no" in smb.conf
12:48 rastar Alex31: Samba 3 might be slower at listing files
12:48 Alex31 rastar: for example, the same document appears to be open two times:
12:48 Alex31 1220         65534      DENY_WRITE 0x12019f    RDWR       LEVEL_II         /data   Informatique/procedures/doc/prise_en_main_BlueMind.doc   Wed Aug 19 14:40:33 2015
12:48 Alex31 1220         65534      DENY_NONE  0x81        RDONLY     NONE             /data   Informatique/procedures/doc/prise_en_main_BlueMind.doc   Wed Aug 19 14:40:34 2015
12:50 rastar Alex31: that can happen if two threads from the same application opened the file..
12:51 kkeithley1 joined #gluster
12:53 rastar Alex31: Or any two applications on same windows machine opened the file.
12:56 kaushal_ joined #gluster
12:56 Alex31 ok for that ... the MS Word option "open network files in RO" was activated .... now it's ok:
12:56 Alex31 1149         65534      DENY_WRITE 0x12019f    RDWR       NONE             /data   Informatique/procedures/doc/changement_mot_de_passe_Thunderbird.doc   Wed Aug 19 14:54:42 2015
12:58 Alex31 from windows, for the same file stored on GFS and stored on a local drive, I have more or less 10 seconds to open the first and 2 seconds for the second
13:00 Alex31 rastar: do you use the classic no_delay option too in smb ?
13:03 shyam joined #gluster
13:03 Alex31 arf... I just read that the kernel share modes option, etc... have to be configured per share
13:03 bennyturns joined #gluster
13:03 kotreshhr left #gluster
13:04 zutto joined #gluster
13:04 rastar Alex31: I have not tried no_delay option
13:07 aravindavk joined #gluster
13:08 atinm joined #gluster
13:08 chirino joined #gluster
13:08 vimal joined #gluster
13:12 ashiq joined #gluster
13:15 kotreshhr joined #gluster
13:19 skoduri joined #gluster
13:20 rafi joined #gluster
13:23 Alex31 rastar:  is it really the good way:
13:24 Alex31 the GFS Samba server name is SERVER01, I have mounted the GFS volume on itself like this: mount.glusterfs SERVER01:/GFSVOLNEW /data
13:24 elico joined #gluster
13:24 ashiq joined #gluster
13:25 Alex31 but I have created the volume on each node like this:
13:25 Alex31 gluster volume create GFSVOLNEW replica 4 SERVER01:/mnt/GFSVOLNEW SERVER02:/mnt/GFSVOLNEW SERVER03:/mnt/GFSVOLNEW SERVER04:/mnt/GFSVOLNEW force
13:25 Alex31 so, on the same server, I can list the directory like this: ll /mnt/GFSVOLNEW/
13:26 Alex31 or like this : ll /data
13:26 Alex31 but it is really faster directly, like ll /mnt/GFSVOLNEW/
13:27 spcmastertim joined #gluster
13:28 dgbaley joined #gluster
13:28 Alex31 rastar: waaaa, really, the difference is incredible!
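Worth noting: in this setup /data is the fuse mount of the volume while /mnt/GFSVOLNEW is the raw brick directory, so listing the latter bypasses the gluster client stack entirely, which is why it looks so much faster; bricks should only ever be read or written through the volume. A sketch of the comparison (somedir is a placeholder):

    mount.glusterfs SERVER01:/GFSVOLNEW /data      # client mount: full replica-4 view
    time ls -l /data/somedir                       # goes through the gluster client stack
    time ls -l /mnt/GFSVOLNEW/somedir              # raw brick: fast, but never modify files here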
13:28 mpietersen joined #gluster
13:28 spcmastertim joined #gluster
13:30 ppai_ joined #gluster
13:32 dgbaley Hey. I loaded up ~1000 home directories into a volume a few days ago. When I do heal info on them, two of the bricks list a lot of entries, and the number of entries hasn't gone down, despite requesting a heal full every so often. (I don't know if it's related but I can't get the NFS daemon to start for this volume either).
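The commands behind the exchange that follows, with VOLNAME as a placeholder:

    gluster volume heal VOLNAME info      # per-brick list of entries still pending heal
    gluster volume heal VOLNAME full      # the full crawl dgbaley has been requesting
    gluster volume status VOLNAME         # brick, self-heal daemon and NFS server state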
13:34 maveric_amitc_ joined #gluster
13:37 aaronott joined #gluster
13:38 klaxa|work joined #gluster
13:45 neofob joined #gluster
14:04 s19n dgbaley: what does 'gluster volume status' say?
14:06 dgbaley That there are no active tasks, that the bricks are up and listening on TCP, the self heal daemons are running, and NFS is not running
14:07 shubhendu joined #gluster
14:07 dgandhi joined #gluster
14:11 kbyrne joined #gluster
14:14 s19n dgbaley: anything relevant in the logs?
14:16 dgbaley I've been watching, not that I see, just client connects/disconnects. I'm trying uneducated things now. I have the volume stopped and I'm rsyncing one brick to the other
14:17 pdrakeweb joined #gluster
14:28 kshlm joined #gluster
14:32 itisravi joined #gluster
14:33 deepakcs joined #gluster
14:34 maveric_amitc_ joined #gluster
14:34 Philambdo joined #gluster
14:35 nthomas joined #gluster
14:35 itisravi_ joined #gluster
14:44 s19n dgbaley: are you syncing extended attributes as well?
14:47 dgbaley yes, should I have not been?
14:47 dgbaley I'm not syncing .glusterfs
14:50 _Bryan_ joined #gluster
15:01 p8952 Has anyone seen anything like this before? https://gist.github.com/p8952/b0ffdf5c6a038c7ed789
15:01 glusterbot Title: gluster · GitHub (at gist.github.com)
15:09 cyberswat joined #gluster
15:09 s19n dgbaley: I'm not sure
15:12 _Bryan_ joined #gluster
15:13 _maserati joined #gluster
15:14 mckaymatt joined #gluster
15:22 calavera joined #gluster
15:30 tanuck joined #gluster
15:38 wushudoin| joined #gluster
15:39 itisravi p8952: writes happen via the client (mount) process, not glusterd.
15:41 p8952 itisravi, thanks for the clarification. But the issue remains, that if glusterfsd/1 is in communication with glusterd/2 when gfs02 is powered off, you lose access to the mount point for X seconds
15:42 p8952 While if it's in communication with glusterd/1 you do not
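The X-second loss of access p8952 describes is typically the client-side ping timeout for the brick that disappeared; a hedged note, where VOLNAME and the value are placeholders:

    gluster volume set VOLNAME network.ping-timeout 10   # default is 42s; lower means faster
                                                         # failover but more spurious disconnects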
15:45 atinm joined #gluster
15:48 mpietersen joined #gluster
15:53 mikemol joined #gluster
15:53 mikemol OK, so I'm in a pickle. I meant to add a brick to a volume, and then remove a different brick from the same volume.
15:54 mikemol I inadvertently missed the "add-brick" step before starting the remove-brick step. There is not enough space on the remaining volumes to store all the data, so there's no way for the rebalance to complete.
15:54 cholcombe joined #gluster
15:54 calavera joined #gluster
15:55 mikemol I stopped the remove-brick, but gluster won't let me add the other brick, saying there's a rebalance in progress. But 'gluster volume remove-brick status' and 'gluster volume rebalance status' don't report any activity underway.
15:56 mikemol So...I'm at something of a loss for what to do; there's not enough space to let the rebalance complete, but I can't seem to back out from shrinking the volume, nor can I seem to expand it.
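For reference, the sequence mikemol intended, with placeholder server and brick names; remove-brick only frees space safely once the add-brick has landed and the data migration has finished:

    gluster volume add-brick VOLNAME newsrv:/bricks/new            # grow the volume first
    gluster volume remove-brick VOLNAME oldsrv:/bricks/old start   # begin migrating data off
    gluster volume remove-brick VOLNAME oldsrv:/bricks/old status  # wait for "completed"
    gluster volume remove-brick VOLNAME oldsrv:/bricks/old commit  # then drop the old brick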
15:57 s19n what about stopping the volume, checking /var/lib/glusterd/vols/<volname>/node_state.info on all nodes, and then restarting it?
15:59 mikemol "volume stop: volname: failed: Staging failed on (brick). Error: Rebalance session is in progress for the volume 'volname'"
16:03 s19n I imagine 'gluster volume rebalance <volname> stop' does not work either
16:04 mikemol Correct: "volume rebalance: <volname>: failed: Rebalance not started."
16:05 mikemol Sigh.
16:07 mikemol I have two bricks I'm waiting to add to this thing. I guess I could create a volume out of just those bricks, move the data over, destroy and rebuild the old bricks, and add them to the new volume.
16:07 mikemol Once, just once, I'd like an expansion of this volume to go as planned.
16:10 ipmango joined #gluster
16:10 Philambdo joined #gluster
16:11 coredump joined #gluster
16:12 s19n Without other more useful suggestions, I'd try stopping all glusterd/glusterfsd daemons on all nodes and start them again.
16:16 mikemol node_state.info still reports that there's a rebalance going on, but that rebalance_op-0=0 and that the rebalance-id UUID is all-0s.
16:22 chuz04arley joined #gluster
16:22 calavera joined #gluster
16:24 s19n who knows if node_state.info can be mangled and if it has any effect on the daemons?
16:26 skoduri joined #gluster
16:27 mikemol I'm just going to blow the brick metadata away, and call the volume structure lost. The underlying data is still identifiable, so it isn't lost. I'll just mv it to the new gluster volume.
16:32 cabillman joined #gluster
16:33 SmithyUK joined #gluster
16:37 jdossey joined #gluster
16:38 DV_ joined #gluster
16:44 trav408 joined #gluster
16:47 calisto joined #gluster
16:47 Guest85635 left #gluster
16:49 trav408 joined #gluster
16:50 DV joined #gluster
16:53 mikemol Whelp. That seems to have done it. Moved the old brick data out of the way, created a volume with all of the bricks, fuse-mounted it, and am now mv'ing all the data back into place.
16:55 Rapture joined #gluster
16:57 mpietersen joined #gluster
16:58 calavera joined #gluster
17:00 jrm16020 joined #gluster
17:30 gem joined #gluster
17:44 elico joined #gluster
17:45 spcmastertim joined #gluster
17:46 pdrakeweb joined #gluster
17:55 timotheus1_ joined #gluster
17:55 ekman- joined #gluster
18:03 Twistedgrim joined #gluster
18:12 calavera joined #gluster
18:18 mckaymatt joined #gluster
18:27 dlambrig joined #gluster
18:35 jrm16020 joined #gluster
18:53 Twistedgrim joined #gluster
18:55 Twistedgrim1 joined #gluster
19:06 mckaymatt joined #gluster
19:10 _Bryan_ joined #gluster
19:15 jwd joined #gluster
19:15 d-fence joined #gluster
19:24 autoditac__ joined #gluster
19:38 ipmango joined #gluster
19:42 jcastill1 joined #gluster
19:46 jcastillo joined #gluster
19:52 elico joined #gluster
19:58 Trefex joined #gluster
20:07 jrm16020 joined #gluster
20:09 mckaymatt joined #gluster
20:12 cyberbootje joined #gluster
20:28 afics joined #gluster
20:30 B21956 joined #gluster
20:32 telmich joined #gluster
20:32 telmich good evening
20:32 telmich we had an outage on one host/brick and would like to connect it back to another brick; is there any way to control / monitor the replication / resynchronisation?
20:34 telmich it was started once automatically and we had to stop the synchronisation, as the cluster was unusable performance-wise
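For the monitoring telmich asks about, the self-heal activity can be watched, and to a degree throttled, with ordinary commands and options; a hedged sketch where VOLNAME and the value are placeholders:

    gluster volume heal VOLNAME info                                  # entries still waiting to heal
    gluster volume heal VOLNAME statistics                            # per-brick crawl/heal counters
    gluster volume set VOLNAME cluster.background-self-heal-count 1   # fewer parallel heals per client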
20:45 elico joined #gluster
21:17 coredump joined #gluster
21:17 badone_ joined #gluster
21:56 _maserati joined #gluster
22:15 jcastill1 joined #gluster
22:18 wushudoin joined #gluster
22:20 jcastillo joined #gluster
22:22 timotheus1 joined #gluster
22:23 gildub joined #gluster
22:23 timotheus1__ joined #gluster
22:27 timotheus1 joined #gluster
22:30 Twistedgrim joined #gluster
22:39 elico joined #gluster
22:55 wushudoin| joined #gluster
23:00 wushudoin| joined #gluster
23:12 mrEriksson joined #gluster
23:17 coredump joined #gluster
23:20 harold joined #gluster
23:20 harold olim, back. How did the cifs stuff go?
23:23 msciciel1 joined #gluster
23:26 corretico joined #gluster
23:30 muneerse2 joined #gluster
23:40 elico joined #gluster
23:56 nishanth joined #gluster
