
IRC log for #gluster, 2017-01-06


All times shown according to UTC.

Time Nick Message
00:02 JoeJulian Caveat4U: "peer probe" doesn't rename a thing. It simply adds new names to existing peers.
00:03 Caveat4U joined #gluster
00:04 JoeJulian Caveat4U: Short of deleting and recreating a volume with new hostnames, stopping your volume and glusterd and replacing that hostname under /var/lib/glusterd/** is the only solution for now.
00:05 JoeJulian Caveat4U: I know the devs are working on "in-place" replace-brick though.
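JoeJulian's manual rename can be sketched as a small shell helper. This is a hedged sketch, not an official procedure: the hostnames and state directory below are placeholders, and glusterd (and the volume) must be stopped on every node before touching /var/lib/glusterd.

```shell
# Sketch of the in-place rename JoeJulian describes. Run only with the
# volume and glusterd stopped on all nodes; hostnames are placeholders.
rename_glusterd_host() {  # usage: rename_glusterd_host <statedir> <old> <new>
    # Rewrite every glusterd state file that mentions the old hostname.
    grep -rl "$2" "$1" | while read -r f; do
        sed -i "s|$2|$3|g" "$f"
    done
}

# e.g. rename_glusterd_host /var/lib/glusterd old.example.com new.example.com
```

Repeat on every peer, then start glusterd again everywhere.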
00:30 jdossey joined #gluster
00:47 susant joined #gluster
00:57 arif-ali joined #gluster
01:10 vbellur joined #gluster
01:12 susant joined #gluster
01:14 Gugge joined #gluster
01:14 martinetd joined #gluster
01:14 Anarka_ joined #gluster
01:14 ws2k3_ joined #gluster
01:14 Klas joined #gluster
01:14 ndevos joined #gluster
01:14 partner joined #gluster
01:14 gnulnx joined #gluster
01:14 abyss^ joined #gluster
01:14 thatgraemeguy joined #gluster
01:14 ebbex joined #gluster
01:14 legreffier joined #gluster
01:14 JPaul joined #gluster
01:14 side_control joined #gluster
01:14 JoeJulian joined #gluster
01:14 bitchecker joined #gluster
01:14 d-fence_ joined #gluster
01:14 colm joined #gluster
01:14 foster joined #gluster
01:14 JonathanD joined #gluster
01:14 ndevos joined #gluster
01:14 d4n13L joined #gluster
01:14 fus joined #gluster
01:14 arif-ali joined #gluster
01:14 thatgraemeguy joined #gluster
01:14 moss joined #gluster
01:15 Utoxin joined #gluster
01:15 Larsen_ joined #gluster
01:15 morse joined #gluster
01:15 BlackoutWNCT joined #gluster
01:16 john51 joined #gluster
01:16 Vaizki joined #gluster
01:16 kshlm joined #gluster
01:16 varesa joined #gluster
01:17 anoopcs joined #gluster
01:17 yawkat joined #gluster
01:18 samikshan joined #gluster
01:22 siel joined #gluster
01:26 davidj joined #gluster
01:26 fyxim joined #gluster
01:27 PotatoGim joined #gluster
01:31 fargox joined #gluster
01:39 daMaestro joined #gluster
01:50 tdasilva joined #gluster
01:59 ankitraj joined #gluster
02:03 cliluw joined #gluster
02:06 Chinorro joined #gluster
02:09 shyam joined #gluster
02:25 daMaestro joined #gluster
02:39 shdeng joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:56 derjohn_mob joined #gluster
03:00 phileas joined #gluster
03:00 BTheOne joined #gluster
03:02 BTheOne My /var partition filled up, and after restarting glusterd it won't start back up again. Has anyone run into this issue before?
03:03 plarsen joined #gluster
03:07 buvanesh_kumar_ joined #gluster
03:09 victori joined #gluster
03:13 Gnomethrower joined #gluster
03:16 BTheOne Seems related: gluster volume set testvol diagnostics.client-log-level ERROR
03:17 kramdoss_ joined #gluster
03:17 BTheOne https://bugzilla.redhat.com/show_bug.cgi?id=858732 *
03:17 glusterbot Bug 858732: high, medium, ---, bugs, CLOSED EOL, glusterd does not start anymore on one node
03:17 ahino joined #gluster
03:31 BTheOne Alright, was missing a peer under /var/lib/glusterd/peers. Was able to reconstruct the missing file from another node.
03:31 BTheOne It starts now.
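For anyone hitting the same wall: the files BTheOne rebuilt live under /var/lib/glusterd/peers/, one per peer, named by that peer's UUID. A sketch of what such a file contains — the UUID and hostname here are placeholders, and on a real node you should copy the file from a healthy peer (as BTheOne did) rather than type it by hand:

```shell
# Recreate a missing peer file (placeholder UUID/hostname; prefer
# copying the real file from another node).
PEERDIR=${PEERDIR:-./peers}   # stand-in for /var/lib/glusterd/peers
UUID=9f1a2b3c-1111-2222-3333-444455556666
mkdir -p "$PEERDIR"
cat > "$PEERDIR/$UUID" <<EOF
uuid=$UUID
state=3
hostname1=gluster2.example.com
EOF
```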
03:36 Wizek_ joined #gluster
03:43 buvanesh_kumar joined #gluster
03:48 atinm_ joined #gluster
03:49 nbalacha joined #gluster
03:49 victori joined #gluster
03:50 atinm joined #gluster
03:52 gyadav joined #gluster
03:54 Karan joined #gluster
04:03 itisravi joined #gluster
04:09 nishanth joined #gluster
04:10 susant joined #gluster
04:10 susant left #gluster
04:18 Shu6h3ndu joined #gluster
04:23 ankitraj joined #gluster
04:30 kdhananjay joined #gluster
04:32 AnkitRaj_ joined #gluster
04:35 gyadav_ joined #gluster
04:36 ankitraj joined #gluster
04:40 kotreshhr joined #gluster
04:45 kramdoss_ joined #gluster
04:46 sanoj joined #gluster
04:49 rafi joined #gluster
04:56 karthik_us joined #gluster
04:56 jiffin joined #gluster
05:03 Caveat4U joined #gluster
05:08 Shu6h3ndu_ joined #gluster
05:08 gyadav joined #gluster
05:10 susant joined #gluster
05:10 ndarshan joined #gluster
05:11 susant left #gluster
05:13 victori joined #gluster
05:18 k4n0 joined #gluster
05:25 ppai joined #gluster
05:29 riyas joined #gluster
05:29 aravindavk joined #gluster
05:31 poornima joined #gluster
05:32 bbooth joined #gluster
05:33 jdossey joined #gluster
05:34 sbulage joined #gluster
05:36 Prasad joined #gluster
05:39 skoduri joined #gluster
05:44 asriram joined #gluster
05:45 sankarshan joined #gluster
05:46 hgowtham joined #gluster
05:46 sankarshan joined #gluster
05:49 prasanth joined #gluster
05:55 sona joined #gluster
05:56 shdeng joined #gluster
05:59 susant joined #gluster
06:09 susant joined #gluster
06:10 DV__ joined #gluster
06:11 k4n0 joined #gluster
06:13 Karan joined #gluster
06:21 kramdoss_ joined #gluster
06:24 rjoseph joined #gluster
06:27 Jacob843 joined #gluster
06:31 msvbhat joined #gluster
06:38 victori joined #gluster
06:47 kramdoss_ joined #gluster
06:49 ashiq joined #gluster
06:56 sona joined #gluster
07:00 [diablo] joined #gluster
07:03 sbulage joined #gluster
07:12 Gnomethrower joined #gluster
07:18 mhulsman joined #gluster
07:19 asriram joined #gluster
07:22 jtux joined #gluster
07:26 rastar joined #gluster
07:33 msvbhat joined #gluster
07:38 bbooth joined #gluster
07:40 mbukatov joined #gluster
07:49 shdeng joined #gluster
07:51 inodb joined #gluster
07:53 sona joined #gluster
08:15 victori joined #gluster
08:22 BlackoutWNCT joined #gluster
08:22 ankitraj joined #gluster
08:26 JonathanD joined #gluster
08:26 devyani7_ joined #gluster
08:26 nobody481 joined #gluster
08:26 saltsa joined #gluster
08:27 wistof_ joined #gluster
08:29 yosafbridge joined #gluster
08:30 portante joined #gluster
08:32 loadtheacc joined #gluster
08:33 k4n0 joined #gluster
08:33 sanoj joined #gluster
08:34 Champi joined #gluster
08:37 mhulsman joined #gluster
08:37 pkalever joined #gluster
08:37 ndarshan joined #gluster
08:37 rastar joined #gluster
08:37 Gugge joined #gluster
08:37 Intensity joined #gluster
08:37 raginbajin joined #gluster
08:37 Igel joined #gluster
08:37 soloslinger joined #gluster
08:37 gls joined #gluster
08:37 eryc joined #gluster
08:37 pasik joined #gluster
08:37 LiftedKilt joined #gluster
08:37 billputer joined #gluster
08:37 ItsMe` joined #gluster
08:37 d-fence joined #gluster
08:37 p7mo joined #gluster
08:37 jvandewege joined #gluster
08:37 Limebyte joined #gluster
08:37 klaas joined #gluster
08:37 pioto joined #gluster
08:37 Acinonyx joined #gluster
08:37 yalu joined #gluster
08:37 shortdudey123 joined #gluster
08:37 misc joined #gluster
08:37 tg2_ joined #gluster
08:37 mlhess joined #gluster
08:37 swebb joined #gluster
08:37 Vaelatern joined #gluster
08:37 crag joined #gluster
08:37 mrEriksson joined #gluster
08:37 yoavz joined #gluster
08:37 lalatenduM joined #gluster
08:37 rofl____ joined #gluster
08:37 kenansulayman joined #gluster
08:37 wiza joined #gluster
08:37 primusinterpare1 joined #gluster
08:37 glusterbot joined #gluster
08:37 logan- joined #gluster
08:37 rideh joined #gluster
08:37 Ramereth joined #gluster
08:37 cloph_away joined #gluster
08:37 suliba joined #gluster
08:37 percevalbot joined #gluster
08:37 koma joined #gluster
08:37 squeakyneb joined #gluster
08:37 l2__ joined #gluster
08:37 semiosis joined #gluster
08:37 Telsin joined #gluster
08:37 lucasrolff joined #gluster
08:37 dgandhi joined #gluster
08:37 jesk joined #gluster
08:37 snixor joined #gluster
08:37 cvstealt1 joined #gluster
08:37 masber joined #gluster
08:37 bhakti joined #gluster
08:37 snehring joined #gluster
08:37 [o__o] joined #gluster
08:37 Dave joined #gluster
08:37 jockek joined #gluster
08:37 Arrfab joined #gluster
08:37 nixpanic joined #gluster
08:37 ic0n_ joined #gluster
08:37 NuxRo joined #gluster
08:37 juhaj joined #gluster
08:37 decay joined #gluster
08:37 javi404 joined #gluster
08:37 uebera|| joined #gluster
08:37 shruti joined #gluster
08:37 aronnax joined #gluster
08:37 jerrcs_ joined #gluster
08:37 coredumb joined #gluster
08:37 Trefex joined #gluster
08:37 rossdm joined #gluster
08:37 ketarax joined #gluster
08:37 scc joined #gluster
08:37 zerick joined #gluster
08:38 DV__ joined #gluster
08:38 susant joined #gluster
08:38 ankitraj joined #gluster
08:38 devyani7_ joined #gluster
08:38 portante joined #gluster
08:38 loadtheacc joined #gluster
08:38 irated joined #gluster
08:39 irated joined #gluster
08:39 pkalever joined #gluster
08:39 ndarshan joined #gluster
08:39 rastar joined #gluster
08:39 susant joined #gluster
08:39 xMopxShell joined #gluster
08:39 thwam joined #gluster
08:42 TvL2386 joined #gluster
08:47 f0rpaxe joined #gluster
08:47 telius joined #gluster
08:47 PotatoGim joined #gluster
08:50 fyxim joined #gluster
08:50 twisted` joined #gluster
08:51 billputer joined #gluster
08:51 rafi1 joined #gluster
08:52 TvL2386 hey guys, I have a replicate test cluster here... I want to test how to remove and add a brick
08:53 TvL2386 in a separate terminal I have a `while true ; do find -type f ; done` running listing all files...
08:53 TvL2386 I notice my glusterfs client has an established tcp connection to both glusterfs-server nodes
08:53 jri joined #gluster
08:53 TvL2386 when I remove one brick, I notice the tcp connection goes down to that node
08:54 TvL2386 and everything continues perfectly
08:54 TvL2386 great!
08:54 TvL2386 then I stop glusterfs-server on that node that has been removed
08:54 TvL2386 reformat the partition the brick is on
08:54 TvL2386 mount it again
08:54 TvL2386 and start glusterfs-server again
08:54 TvL2386 great... so far everything is fine
08:55 TvL2386 then I add the brick again: `gluster volume add-brick volume01 replica 2 10.0.0.12:/glusterfs/disk01/volume01`
08:55 AppStore joined #gluster
08:56 susant joined #gluster
08:56 TvL2386 as soon as I hit enter, the `find` lists no files anymore... it's not hanging... I can do `ls` and it just shows the glusterfs-client mount point is empty
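For reference, the full cycle TvL2386 walks through, as one sketch. The volume and brick names come from his messages; it is a dry run by default (it echoes the commands) unless GLUSTER is pointed at the real binary. The empty listing after add-brick is plausibly the client reading from the still-empty new brick; triggering a full heal before trusting the volume may help:

```shell
# Dry-run sketch of TvL2386's remove/reformat/add cycle; set
# GLUSTER="gluster" (or "sudo gluster") to run it for real.
GLUSTER=${GLUSTER:-echo gluster}

recycle_brick() {  # usage: recycle_brick <volume> <host:/brick/path>
    $GLUSTER volume remove-brick "$1" replica 1 "$2" force
    # ...stop glusterfs-server on the peer, reformat and remount
    #    the brick filesystem, start glusterfs-server again...
    $GLUSTER volume add-brick "$1" replica 2 "$2"
    # Repopulate the empty brick before clients trust it for reads.
    $GLUSTER volume heal "$1" full
}

recycle_brick volume01 10.0.0.12:/glusterfs/disk01/volume01
```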
08:56 sona joined #gluster
08:57 sanoj joined #gluster
08:58 nbalacha joined #gluster
08:58 Ulrar joined #gluster
08:59 atinm_ joined #gluster
09:00 coredumb joined #gluster
09:04 kramdoss_ joined #gluster
09:05 Caveat4U joined #gluster
09:06 karthik_us joined #gluster
09:10 ashiq joined #gluster
09:16 asriram joined #gluster
09:17 gem joined #gluster
09:23 gem joined #gluster
09:26 kramdoss_ joined #gluster
09:30 rafi joined #gluster
09:32 derjohn_mob joined #gluster
09:40 kdhananjay joined #gluster
09:45 [diablo] joined #gluster
09:50 karthik_us joined #gluster
09:53 flying joined #gluster
09:56 rafi1 joined #gluster
09:57 legreffier joined #gluster
10:02 victori joined #gluster
10:03 Lee1092 joined #gluster
10:04 mhulsman joined #gluster
10:05 devyani7 joined #gluster
10:24 atinm_ joined #gluster
10:25 asriram joined #gluster
10:26 nbalacha joined #gluster
10:43 nishanth joined #gluster
10:46 jdossey joined #gluster
10:46 devyani7 joined #gluster
10:47 rafi joined #gluster
10:50 itisravi joined #gluster
10:52 victori joined #gluster
10:58 susant joined #gluster
11:00 karthik_us joined #gluster
11:02 atinmu joined #gluster
11:09 jdossey joined #gluster
11:11 SeerKan joined #gluster
11:11 SeerKan Hello guys
11:12 SeerKan I currently have a cluster created with 2 servers, 1 brick on each and they are replicated
11:13 SeerKan I need to add some more space, so I would like to add one more partition/brick on each server, those 2 still replicated between them and add space to the same mount
11:13 SeerKan is that possible ?
11:19 msvbhat joined #gluster
11:20 Wizek_ joined #gluster
11:30 pulli joined #gluster
11:36 Slashman joined #gluster
11:37 buvanesh_kumar joined #gluster
11:37 jtux joined #gluster
11:38 Ulrar Is there a problem in running "gluster volume profile VMs info cumulative" at the same time on multiple bricks ?
11:42 kdhananjay Ulrar: none except for glusterd lock contention
11:43 kdhananjay Ulrar: which is true with running any two gluster commands in parallel (I suppose if they're on the same volume, and not otherwise)
11:47 Ulrar kdhananjay: It's strange, my graphs are working fine as long as only one is running, not sure why
11:47 Ulrar I'm adding some debugging into my plugin to try and get the error
11:47 Ulrar What's even weirder is that on different servers, everything works fine even with 3 running at the same time
11:48 Ulrar Might be a difference between 3.7.12 and 3.7.15 ..
11:48 [diablo] joined #gluster
11:48 kovshenin joined #gluster
11:48 kdhananjay Ulrar: does running the command give out an error like "Another transaction is in progress ... "
11:48 itisravi joined #gluster
11:48 kdhananjay ?
11:49 Ulrar Never got that by hand, not sure what the plugin is seeing. I'm uploading a version with some debug in it, I'll check
11:53 Ulrar kdhananjay: Do you think that's the best way to get the read and write on a volume btw ?
11:53 Ulrar I'm using it to graph input / output on the volume
11:57 Ulrar Yeah, looks like it might be "Another transaction is in progress ..."
11:57 gem joined #gluster
11:58 Ulrar Strange I don't have that problem on 3.7.15
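If the collisions really are glusterd's cluster-wide transaction lock, staggering the poll times across nodes is the simpler cure — a local flock(1) wrapper only serializes pollers on the same box. Still, that stops two collectors on one node from racing each other. A sketch (the lock path is an arbitrary choice):

```shell
# Run one command at a time per node; cannot serialize queries issued
# from *other* nodes, which still contend on glusterd's own lock.
with_profile_lock() {
    flock -w 10 "${LOCK:-/tmp/gluster-profile.lock}" "$@"
}

# e.g. with_profile_lock sudo gluster volume profile VMs info cumulative
```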
12:00 kdhananjay Ulrar: i didnt get your point about "read and write". can you be more specific?
12:02 Ulrar kdhananjay: I'm using the "Data Read: 8465820306944 bytes" and "Data Written: 66524934379701 bytes" to calculate the current b/s being read and written on a volume
12:03 Ulrar Which works fine, just wondering if there is an easier way to get that
12:05 derjohn_mob joined #gluster
12:05 kdhananjay Ulrar: hmm this idea works and it's pretty cool actually. :) but why would you want to run it on multiple nodes?
12:06 kdhananjay Ulrar: one execution of 'volume profile ...' will hit all bricks and collect data from all bricks.
12:06 kdhananjay Ulrar: wait, i think there is a way to do it through .meta
12:06 kdhananjay let me check
12:06 kdhananjay Ulrar: although i doubt it will work with libgfapi
12:08 susant left #gluster
12:08 kdhananjay Ulrar: are you using fuse? or gfapi?
12:10 jdossey joined #gluster
12:10 Ulrar kdhananjay: Yeah I'm using gfapi, it's proxmox nodes
12:11 Ulrar And I don't really need to do it on multiple nodes, it's just convenient
12:11 Ulrar For now I've disabled it on all nodes but one, works fine but it's a bit harder to have cool graphs on all nodes
12:12 Ulrar We're currently testing grafana so I made a little telegraf plugin for glusterfs, the result is pretty nice I have to say
12:15 kdhananjay Ulrar: oh like what kind of metrics?
12:17 Ulrar kdhananjay: For now just b/s being read and written (seemed the easiest and most interesting to start), I'll probably add other stuff if we actually keep grafana. I'm thinking latency would be nice to track since we are hosting VMs on gluster
12:18 Ulrar But the profiler gives a lot of info and I'm not sure I understand everything for now :)
12:18 sona joined #gluster
12:20 nishanth joined #gluster
12:21 kdhananjay Ulrar: cool, sounds interesting. :)
12:22 Ulrar Yeah, the only thing I'm not really happy with right now is that I have to use sudo to run the gluster command
12:22 Ulrar But I don't see a way around that
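The scrape Ulrar describes can be reduced to a few lines. A sketch, assuming the "Data Read: N bytes" / "Data Written: N bytes" lines he quotes (profile prints one pair per brick, so the helper sums them); take two samples some interval apart and divide the deltas by the interval to get bytes/second:

```shell
# Sum the cumulative byte counters from
# `gluster volume profile <vol> info cumulative` (read on stdin).
profile_bytes() {  # prints "<read> <written>"
    awk '/Data Read:/    { r += $(NF-1) }
         /Data Written:/ { w += $(NF-1) }
         END { print r+0, w+0 }'
}

# e.g. sudo gluster volume profile VMs info cumulative | profile_bytes
```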
12:23 ibotty joined #gluster
12:23 ibotty left #gluster
12:24 Ulrar And, well, the fact that telegraf doesn't seem to be accepting any PR right now :)
12:25 ibotty joined #gluster
12:26 ibotty Hi, I have a strange phenomenon. Using Gluster 3.8, some frequently accessed files _on some mounts only_ become inaccessible.
12:26 ibotty the files can be accessed on the bricks, and on other mounts
12:26 ibotty they can be stat'ed, but can't be read (i.e. cat stalls)
12:27 ibotty this is gluster mounted via kubernetes
12:28 ibotty If I recreate the file (with the same content), I can access the file perfectly again
12:29 sbulage joined #gluster
12:39 jri joined #gluster
12:41 atinmu joined #gluster
12:48 aravindavk joined #gluster
12:49 victori joined #gluster
12:58 Caveat4U joined #gluster
13:11 jdossey joined #gluster
13:12 nbalacha joined #gluster
13:13 unclemarc joined #gluster
13:20 victori joined #gluster
13:23 susant joined #gluster
13:24 plarsen joined #gluster
13:25 msvbhat joined #gluster
13:31 susant left #gluster
13:40 kotreshhr joined #gluster
13:50 victori joined #gluster
13:58 jkroon joined #gluster
14:04 ira joined #gluster
14:09 bluenemo joined #gluster
14:12 jdossey joined #gluster
14:12 jiffin joined #gluster
14:14 hybrid512 joined #gluster
14:27 jiffin1 joined #gluster
14:28 jkroon joined #gluster
14:29 cloph_away Hi * - any chance to get a "stable" link for the nfs-ganesha repo at https://download.gluster.org/pub/gluster/nfs-ganesha/ ? So far the link would have to contain the full version, thus not getting the bugfix releases unless one adjusts the links in the sources definition. Would it be possible to have a 2.4 directory instead, similar to how it's done for the gluster packages?
14:29 glusterbot Title: Index of /pub/gluster/nfs-ganesha (at download.gluster.org)
14:30 kpease joined #gluster
14:30 squizzi joined #gluster
14:30 kpease_ joined #gluster
14:34 shaunm joined #gluster
14:38 kkeithley cloph_away:  the only packages there today are for Debian.  Get the latest nfs-ganesha bits for  Fedora from Fedora (fedora-updates or fedora-updates-testing)
14:38 kkeithley Ubuntu from Launchpad PPA at https://launchpad.net/~gluster
14:38 glusterbot Title: Gluster in Launchpad (at launchpad.net)
14:39 kkeithley RHEL and CentOS from the CentOS Storage SIG.
14:39 saltsa joined #gluster
14:39 kkeithley There's a README file there that explains all this.
14:39 cloph_away Oh, misunderstanding - I meant for the Debian version. For gluster there's a 3.8/LATEST combo that can be used to update to 3.8.x bugfix releases, but for ganesha it is a fixed link that needs to be manually changed whenever there's an update.
14:40 cloph_away i.e. now it is 2.4.1, but if 2.4.2 is released, the link has to be manually changed in sources.list.d/whatever to 2.4.2
14:40 kkeithley sure, I'll do that going forward
14:41 cloph_away thx a lot :-)
14:42 kkeithley it's there now
14:42 victori joined #gluster
14:44 kkeithley nice to know that people are actually using it. ;-)
14:44 cloph_away :-)
14:45 cloph_away (not using any of ganesha's HA features though)
14:46 kkeithley If your clients aren't doing locking then they don't need lock recovery.
14:47 Ulrar Noticed yesterday it was possible to install ganesha on debian, I will probably test it one of these days
14:47 kkeithley And I don't know what the state/status of pacemaker and corosync is on Debian so that could be a real crapshoot
14:48 Ulrar There was a couple of times last year I wished ganesha was available for debian
14:48 gem joined #gluster
14:48 kkeithley let us know when you do test it. Feedback is always good
14:49 Ulrar Sure !
14:49 kkeithley and yeah, getting .deb builds working was an exercise
14:49 d0nn1e joined #gluster
14:50 Ulrar Oh, I'm sure :). I'm looking at packaging stuff for us and it seems complicated for no reason
14:50 skylar joined #gluster
14:50 Ulrar Guess I'm too used to Gentoo's ebuild system
14:50 kkeithley *painful* exercise
14:51 * cloph_away also never understood why people praise apt/deb... Compared to urpmi or yum for that matter it is a total mess :-/
14:52 Ulrar Can't really comment on that myself, I've never really used any yum based distro. The rare contacts I had with that are CentOS, which annoys me for being even less up to date than debian so .. :)
14:53 sanoj|afk joined #gluster
14:53 kkeithley uh, yeah. I would never think of CentOS as having the latest of anything. Fedora is much more bleeding edge
14:54 kkeithley When I look at what's in Debian Stretch even, compared to Fedora, I wonder why Stretch is so far behind.
14:54 kkeithley sometimes
14:54 Ulrar Never used stretch, I basically use debian only at work, so it's always the stable release
14:55 * cloph_away 's happy with debian on server, and Mageia on my own machines
14:56 jkroon joined #gluster
14:57 * kkeithley only uses Debian to build gluster and ganesha pkgs
14:57 Ulrar Well, we appreciate that!
14:58 cloph_away +1
15:01 kkeithley I'm never really sure. CentOS (and RHEL) packages can be deceptive. They may look like old versions of things, but they often have fixes backported from later releases
15:02 saltsa joined #gluster
15:02 kkeithley the kernel in particular.
15:02 Ulrar Yeah, but when you're trying to copy the usual config it might not load because the version is that old
15:02 kkeithley yup
15:02 Ulrar Guess that's not a problem if you have only CentOS everywhere
15:12 jdossey joined #gluster
15:16 victori joined #gluster
15:16 john51_ joined #gluster
15:17 nbalacha joined #gluster
15:23 ankitraj joined #gluster
15:40 annettec joined #gluster
15:44 orogor i bet nobody uses urbackup ?
15:44 orogor it just hangs on backing up a gluster fuse fs
15:44 jdossey joined #gluster
15:45 rwheeler joined #gluster
15:45 farhorizon joined #gluster
15:49 TvL2386 I'm not using it :)
15:49 Dave joined #gluster
15:57 susant joined #gluster
16:01 ashka joined #gluster
16:01 ashka joined #gluster
16:02 farhorizon joined #gluster
16:03 farhoriz_ joined #gluster
16:08 Philambdo joined #gluster
16:11 Caveat4U_ joined #gluster
16:23 primehaxor joined #gluster
16:41 jdossey joined #gluster
16:42 victori joined #gluster
16:49 vbellur joined #gluster
16:55 JesperA joined #gluster
17:18 farhorizon joined #gluster
17:20 [diablo] joined #gluster
17:23 Caveat4U joined #gluster
17:24 Caveat4U joined #gluster
17:24 farhorizon joined #gluster
17:27 Caveat4U_ joined #gluster
17:42 kotreshhr left #gluster
17:42 Caveat4U joined #gluster
17:54 Philambdo joined #gluster
17:58 Caveat4U joined #gluster
17:59 Caveat4U joined #gluster
18:09 Caveat4U joined #gluster
18:16 mhulsman joined #gluster
18:18 rastar joined #gluster
18:22 skoduri joined #gluster
18:33 bbooth joined #gluster
18:38 msvbhat joined #gluster
19:10 Caveat4U joined #gluster
19:11 loadtheacc joined #gluster
19:12 Caveat4U_ joined #gluster
19:12 mhulsman joined #gluster
19:19 Asako joined #gluster
19:21 Gugge joined #gluster
19:22 p7mo_ joined #gluster
19:22 pioto_ joined #gluster
19:22 pasik_ joined #gluster
19:23 rwheeler joined #gluster
19:23 jvandewege joined #gluster
19:26 swebb_ joined #gluster
19:50 cliluw joined #gluster
20:05 mhulsman joined #gluster
20:06 farhorizon joined #gluster
20:10 Asako I think I blew up gluster
20:12 Caveat4U joined #gluster
20:21 Asako can I replace a brick while the cluster is healing?
20:25 Asako and how do I monitor healing progress?
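For the record, the usual answers to Asako's second question (volume name "myvol" is a placeholder): `gluster volume heal <vol> info` lists pending entries per brick, and `gluster volume heal <vol> statistics heal-count` gives just the counts. Waiting for those to reach zero before swapping another brick is generally the safe course. A small helper to reduce `heal info` output to a single number:

```shell
# Sum the "Number of entries: N" lines from
# `gluster volume heal <vol> info` (read on stdin).
heal_total() {
    awk -F': *' '/Number of entries:/ { t += $2 } END { print t+0 }'
}

# e.g. gluster volume heal myvol info | heal_total   # 0 == fully healed
```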
20:31 Caveat4U joined #gluster
20:34 farhorizon joined #gluster
20:52 shaunm joined #gluster
21:10 bbooth joined #gluster
21:12 tdasilva joined #gluster
21:18 farhorizon joined #gluster
21:24 farhorizon joined #gluster
21:25 farhoriz_ joined #gluster
21:27 vbellur joined #gluster
21:32 farhorizon joined #gluster
21:56 pulli joined #gluster
22:07 Caveat4U_ joined #gluster
22:23 pulli joined #gluster
22:24 PatNarciso joined #gluster
22:30 Caveat4U joined #gluster
22:34 mhulsman joined #gluster
22:34 primehaxor joined #gluster
22:35 mhulsman joined #gluster
22:36 mhulsman joined #gluster
22:36 mhulsman joined #gluster
22:37 arpu joined #gluster
22:37 mhulsman joined #gluster
22:38 mhulsman joined #gluster
22:47 pulli joined #gluster
22:54 Philambdo joined #gluster
23:01 Caveat4U_ joined #gluster
23:13 pulli joined #gluster
23:16 bbooth joined #gluster
23:18 lucasrolff joined #gluster
23:27 derjohn_mob joined #gluster
23:34 Pupeno joined #gluster
23:40 Marbug_ joined #gluster
