
IRC log for #gluster, 2015-01-16


All times shown according to UTC.

Time Nick Message
00:02 chen left #gluster
00:17 B21956 joined #gluster
00:23 side_control joined #gluster
00:26 plarsen joined #gluster
01:04 ryao_ joined #gluster
01:13 MugginsM joined #gluster
01:35 tessier joined #gluster
01:36 nangthang joined #gluster
01:36 thangnn_ joined #gluster
01:55 harish joined #gluster
01:57 harish joined #gluster
02:18 daMaestro joined #gluster
02:24 Gill joined #gluster
02:26 bharata-rao joined #gluster
02:34 dgandhi joined #gluster
02:58 bala joined #gluster
03:06 nrcpts joined #gluster
03:12 johnnytran joined #gluster
03:12 nangthang joined #gluster
03:49 jobewan joined #gluster
03:54 codad joined #gluster
03:54 ppai joined #gluster
03:56 itisravi joined #gluster
03:57 shubhendu joined #gluster
04:10 atinmu joined #gluster
04:18 kshlm joined #gluster
04:21 fandi_ joined #gluster
04:24 nishanth joined #gluster
04:35 anoopcs joined #gluster
04:35 lalatenduM joined #gluster
04:49 ira joined #gluster
04:49 anoopcs joined #gluster
04:50 ndarshan joined #gluster
04:51 sakshi joined #gluster
04:52 gem joined #gluster
04:55 jiffin joined #gluster
05:03 suman_d joined #gluster
05:10 rafi joined #gluster
05:13 daMaestro joined #gluster
05:21 kumar joined #gluster
05:35 hagarth joined #gluster
05:44 meghanam joined #gluster
05:53 ndarshan joined #gluster
06:07 karnan joined #gluster
06:09 ndarshan joined #gluster
06:15 ramteid joined #gluster
06:30 soumya_ joined #gluster
06:32 saurabh joined #gluster
06:32 raghu joined #gluster
06:36 jobewan joined #gluster
06:37 calisto joined #gluster
06:42 atalur joined #gluster
06:44 nangthang joined #gluster
06:48 jobewan joined #gluster
06:49 anil joined #gluster
07:03 ctria joined #gluster
07:07 overclk joined #gluster
07:09 aravindavk joined #gluster
07:17 jtux joined #gluster
07:29 jkroon joined #gluster
07:30 jkroon hi all, is it possible to reshape a glusterfs layout on the fly?
07:30 jkroon in particular I'd like to alter a 1x2x2 distribute/stripe/replicate to a 2x1x2 distribute/stripe/replicate (In short, I want to disable striping)
07:42 nrcpts joined #gluster
07:44 ricky-ticky1 joined #gluster
07:45 aulait joined #gluster
07:46 Philambdo joined #gluster
07:48 RobertLaptop joined #gluster
07:53 rgustafs joined #gluster
07:56 Manikandan joined #gluster
08:10 getup joined #gluster
08:16 soumya joined #gluster
08:25 purpleidea joined #gluster
08:27 kdhananjay joined #gluster
08:27 fsimonce joined #gluster
08:27 rjoseph joined #gluster
08:30 deepakcs joined #gluster
08:35 bala joined #gluster
08:39 T0aD joined #gluster
08:41 anoopcs joined #gluster
08:45 geaaru joined #gluster
08:47 geaaru hi, is there a way to set a brick to a broken status to avoid using a disk ? That way I can gain some time before replace-brick ?
08:47 geaaru thanks in advance
08:47 deniszh joined #gluster
08:52 deniszh joined #gluster
08:58 deniszh joined #gluster
09:04 tg2 joined #gluster
09:06 soumya joined #gluster
09:09 Bardack joined #gluster
09:17 karnan joined #gluster
09:18 shubhendu joined #gluster
09:20 ndarshan joined #gluster
09:20 rjoseph joined #gluster
09:28 itisravi joined #gluster
09:30 ghenry joined #gluster
09:30 ghenry joined #gluster
09:32 ricky-ticky joined #gluster
09:33 Slashman joined #gluster
09:34 elico joined #gluster
09:38 glusterbot News from newglusterbugs: [Bug 1182932] Do not have tmpfiles snippet for /var/run/gluster <https://bugzilla.redhat.com/show_bug.cgi?id=1182932>
09:38 geaaru hi, how can i stop a geo-replication session in faulty status ? thanks in advance
09:42 malevolent joined #gluster
09:43 karnan joined #gluster
09:47 partner geaaru: well, you can kill the brick process to make it unavailable
09:47 partner that way, if it's partially functional, it won't be serving files anymore
09:48 partner it seems people tend to rely _really_ heavily on the replace-brick operation, probably because there are loads of instructions out there and even the official docs discuss it at length
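
For reference, a minimal sketch of taking one brick offline by killing its glusterfsd process (each brick runs its own), assuming a hypothetical volume "myvol" with a brick at /bricks/b1:

    # the Pid column of volume status lists each brick's process id
    gluster volume status myvol

    # or find the brick's glusterfsd directly by its brick path
    ps aux | grep 'glusterfsd.*bricks/b1'

    # kill it; on a replicated volume, clients carry on from the other replica
    kill <pid-from-above>
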
09:59 ndarshan joined #gluster
10:00 deepakcs joined #gluster
10:02 anoopcs joined #gluster
10:02 rjoseph joined #gluster
10:02 polychrise joined #gluster
10:05 shubhendu joined #gluster
10:24 asku joined #gluster
10:26 ntt joined #gluster
10:27 ntt Hi. i've installed glusterfs 3.6 (client). Can i mount a volume from a glusterfs 3.4 server (other clients mount the same volume with 3.4 clients) ?
10:31 m0ellemeister joined #gluster
10:33 partner ntt: sure you can
10:37 vimal joined #gluster
10:39 meghanam joined #gluster
10:39 anoopcs joined #gluster
10:40 rjoseph joined #gluster
10:43 rjoseph joined #gluster
10:43 sharknardo joined #gluster
10:45 ntt partner: sorry.... it is a production environment and i don't want to cause any damage.....
10:46 ntt some other clients are using nfs but i'm planning a migration to the native glusterfs client
10:47 nishanth joined #gluster
10:53 ntt partner: i have a problem when i try to mount a volume with the glusterfs client: "E [xlator.c:406:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again"
10:53 ntt should i load fuse module?
11:02 misko ntt: try export LC_ALL=C; mount -t glusterfs ...
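
If the fuse module really is missing, loading it and retrying is the quick check; a sketch, with "storage1" and "myvol" as hypothetical names:

    # the native client needs the fuse kernel module
    modprobe fuse
    lsmod | grep fuse

    # retry the mount
    mount -t glusterfs storage1:/myvol /mnt/myvol

    # the client log (named after the mount point) usually has the real error
    tail -n 50 /var/log/glusterfs/mnt-myvol.log
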
11:12 ndevos partner: I'm now booking my trip for next weeks meetup :)
11:16 meghanam joined #gluster
11:18 g0dz1ll4 joined #gluster
11:20 m0ellemeister joined #gluster
11:31 nishanth joined #gluster
11:35 fandi_ hi all,
11:36 fandi_ how do i tune gluster if i have 13k files? it takes almost 30 seconds for ls .. and php cannot create the file
11:36 elico joined #gluster
11:41 misko run strace on ls and see where it sleeps
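
A sketch of what that strace run could look like, against a hypothetical mount at /mnt/myvol:

    # syscall counts and cumulative time: shows where ls spends its 30 seconds
    strace -c ls /mnt/myvol

    # per-call timing; with ls -l (or ls --color) every entry gets stat()ed,
    # and each stat is a network round trip on a fuse mount
    strace -T -e trace=stat,lstat,getdents ls -l /mnt/myvol
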
11:44 bala joined #gluster
11:46 nshaikh joined #gluster
11:46 fandi joined #gluster
11:48 Arminder joined #gluster
11:52 Arminder- joined #gluster
11:57 Arminder joined #gluster
11:58 ntt misko
11:59 Arminder- joined #gluster
12:03 abyss^ joined #gluster
12:06 ntt misko: found the problem: "DNS resolution failed on host server1". But I have a question: my storage server has hostname = storage1, and in /etc/hosts i have 172.16.1.1 storage1 and 10.100.0.1 server1. 10.100.0.x is my dedicated replication network. Why, when i try to mount from a client, do i need to set 172.16.1.1 = server1 (not storage1!!) in my /etc/hosts? If i use storage1 (172.16.1.1) i get a dns resolution error....
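
The names that matter are the ones the bricks were registered under when the volume was created; the client fetches the volfile and must itself resolve those exact names. A sketch, with hypothetical names:

    # on a server: see which hostnames the bricks use
    gluster volume info myvol | grep Brick
    # e.g.  Brick1: server1:/bricks/b1

    # on the client: those names must resolve, e.g. in /etc/hosts:
    # 172.16.1.1   server1
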
12:06 partner ndevos: nice, we shall meet there then :)
12:06 partner i was asked to tell something about our usage but i haven't had a bit of time to write anything (nor promised to do so either) :/
12:08 partner been very busy migrating to another datacenter but happy to report all our storage was moved without issues, an assortment of distributed and replicated volumes; the only weird issue encountered was gluster bricks coming back up and then uwsgi failing to serve data from them in a most strange way
12:08 partner the schedule did not allow any online migrations so we had to make the infra withstand the absence of any bricks
12:10 ndevos partner: it would be great if you could present maybe 10-15 minutes about how you use gluster? like, number of servers/bricks/volumes and how the data is used (and growing?)
12:11 _Bryan_ joined #gluster
12:11 ndevos partner: something about the moving would be interesting too, but mainly for me, and probably less for people attending the meetup
12:12 partner i know.. i just haven't got any time and i'm not exactly a frequent speaker at any events :/
12:12 partner but hey, i do have powerpoint !!1!! :D
12:14 bala joined #gluster
12:15 ndevos partner: I'm happy to talk about your deployment too, but well, I dont know anything about it :)
12:15 ndevos partner: we can also do a talk together if you would prefer that
12:18 ira joined #gluster
12:19 partner it's nothing fancy really
12:19 partner some 16 or so servers with roughly 700 TB of usable diskspace, plus a couple of independent setups serving Mesos and OpenStack
12:20 partner lets see if i can draft something up, 10 minutes isn't really that long to discuss things at length
12:21 ndevos the "fancy" really depends on what you're used to :) I think its still quite an impressive setup
12:22 ndevos well, the slot I got is 30-45 minutes, I guess you could have something like that too, just depends on what you would like to explain
12:23 partner no pressure... ;D
12:24 ndevos hehe, no, and I think 15 minutes for a use-case is pretty nice :)
12:24 ppai joined #gluster
12:25 partner kind of spinoff for aftertalks
12:25 Arminder joined #gluster
12:25 ndevos just let Toni know if you can present something, and how long you would like to talk - I'm sure he can accommodate you
12:26 partner yeah, he has asked us if we could talk about our ceph or gluster setups, but as ceph is a bit too young in our use there's not much to say; gluster has been around for, hmm, no idea, 2 years i guess?
12:27 kryl joined #gluster
12:27 kryl hi
12:27 glusterbot kryl: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:27 ndevos well, that sounds good - and I know that there is one use-case for Ceph too, so if you can present Gluster that would be awesome
12:27 kryl where are volume IDs stored, and what am I supposed to back up if I want to restore volumes to another system ?
12:28 ndevos kryl: its an extended attribute that is set on the root-directory of the brick
12:28 partner kryl: see /var/lib/glusterfs for all the bits and details
12:28 partner hmm should read the whole line before answering :o
12:28 ndevos @xattrs
12:28 kryl I only have glusterd in this place !
12:28 ndevos @extended attributes
12:28 glusterbot ndevos: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
12:28 kryl /var/lib/glusterd
12:29 ndevos kryl: "getfattr -m .  -d -e hex /path/to/brick" as root should show a volume-id xattr
12:29 partner hmm, i guess it's the same, mine is glusterfs, but if it contains info about the peers, bricks and whatnot it's the right place (for configuration data)
12:30 kryl the primary server is totally down I use to connect the HD to an other to get data and restore them .
12:30 ndevos @replace server
12:30 glusterbot ndevos: Error: 'server' is not a valid topic number.
12:30 ndevos @replace
12:30 glusterbot ndevos: (replace [<channel>] <number> <topic>) -- Replaces topic <number> with <topic>.
12:31 ndevos ah... ,,(replace server)
12:31 glusterbot ndevos: Error: No factoid matches that key.
12:31 * ndevos gives up and looks for the wiki page
12:31 kryl ok I have to restore logs to understand what's going wrong
12:31 ndevos kryl: maybe http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server ?
12:31 kryl I cp the /var/lib/glusterd in place and /etc/glusterfs and it doesn't start
12:35 bala joined #gluster
12:36 ndevos kryl: "it doesn't start" does not really explain a lot - did you check the logs?
12:37 hagarth joined #gluster
12:37 kryl 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device
12:38 edwardm61 joined #gluster
12:38 RaSTar kryl, is that a rdma volume
12:38 RaSTar ?
12:39 kryl which website do you like to pastebin ?
12:41 ndevos @paste
12:41 glusterbot ndevos: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
12:41 kryl http://sprunge.us/LPLh
12:42 ndevos that suggests something in /var/lib/glusterd is not correct...
12:43 partner hmm, looks familiar from the fresh past.. now what was it..
12:44 kryl these logs are not sufficient for me to find a solution :-(
12:45 partner just a remote guess, could this be a version issue? were the original boxes upgraded, and now that one crashed it's a fresh install and the versions don't match?
12:45 nishanth joined #gluster
12:45 partner as in operating-version ?
12:46 partner hmm can't be if the data was copied back..
12:48 partner just had issues with that and glusterd started once but no longer restarted
12:52 kryl glusterd --debug results : http://sprunge.us/PMCd
12:53 kryl how to know about the version of the old system ?
12:53 kryl is there a place where I can read that ?
12:53 kryl because I can't use the old server it's just copied data.
12:55 DV joined #gluster
12:55 kryl well ...
12:55 kryl it's too long
12:55 kryl I'll try to restart everything from beginning :-(
12:55 kryl that suck
12:55 kryl lol
12:58 partner in your case it should be /var/lib/glusterd/glusterd.info
13:01 kryl mmm ?
13:01 kryl the UUID in place doesn't fit anything
13:03 kryl what this UUID is supposed to match ?
13:09 glusterbot News from newglusterbugs: [Bug 1183018] Do not have tmpfiles snippet for /var/run/gluster <https://bugzilla.redhat.com/show_bug.cgi?id=1183018>
13:10 kryl if I want to fix this UUID ? how to generate a new one ?
13:14 harish joined #gluster
13:14 partner hmm i think it's that file i mentioned. of course then you will also need to detach the old uuid from the peer and peer probe the new host again, but if you are trying to replace the broken server that is probably not the best way of doing it
13:14 partner there was a link above to guide you through replacing a broken server
13:15 partner stating for example things such as: echo UUID={uuid from previous step}>/var/lib/glusterd/glusterd.info
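
Condensed from that guide, the sequence looks roughly like this (hostnames hypothetical; on newer versions, check whether glusterd.info also carries an operating-version line before overwriting it):

    # on a surviving peer: note the UUID the dead server had
    gluster peer status

    # on the rebuilt server: restore the old identity, then re-join
    service glusterd stop
    echo "UUID=<uuid from previous step>" > /var/lib/glusterd/glusterd.info
    service glusterd start
    gluster peer probe server2     # any surviving peer
    service glusterd restart       # picks up the synced volume configs
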
13:15 kryl let me precise something
13:15 kryl there is no peer at all
13:15 kryl it's just the master server
13:15 kryl forget all the rest :)
13:15 partner oh
13:15 partner i missed that one
13:16 kryl I don't understand why I can't start it up
13:16 kryl maybe it's a version pb
13:16 kryl but I don't know what was the old version
13:16 partner gluster version?
13:16 kryl yes
13:16 partner see what's inside the glusterd.info file?
13:17 partner from the backup you had
13:17 partner operating-version=?
13:17 kryl in the old glusterd.info  I have : operating-version=30501
13:17 lpabon joined #gluster
13:17 kryl and the actual version is : glusterfs 3.5.1 built on Jun 28 2014 04:14:47
13:17 kryl I guess it's the same
13:18 partner looks good to me
13:18 kryl UUID is arbitrary so I don't understand why it would be a problem
13:20 partner hmm yeah
13:21 partner to my understanding its for the host and used for peer related activities
13:21 partner ups
13:22 partner is there any selinux or such involved on the new box?
13:23 B21956 joined #gluster
13:23 kryl nop
13:23 kryl it's exactly the same configuration
13:23 kryl ok I regenerate a new glusterd.info and it's the same problem
13:24 kryl I think it's something else
13:25 DV joined #gluster
13:27 bene2 joined #gluster
13:28 kryl ok it was just an IP problem ...
13:28 kryl ah ah
13:36 elico joined #gluster
13:43 partner the volumes/bricks do use dns names (or ip addresses - don't use those)
13:47 ildefonso joined #gluster
13:48 LebedevRI joined #gluster
13:49 shubhendu joined #gluster
13:49 athinkingmeat joined #gluster
13:49 partner but sounds like you got it fixed so great
13:51 rgustafs joined #gluster
13:57 jmarley joined #gluster
13:59 fandi joined #gluster
14:04 lalatenduM joined #gluster
14:07 fandi joined #gluster
14:09 glusterbot News from newglusterbugs: [Bug 1178031] [SNAPSHOT]: fails to create on thin LV with LUKS layer in between <https://bugzilla.redhat.com/show_bug.cgi?id=1178031>
14:12 bennyturns joined #gluster
14:15 virusuy joined #gluster
14:15 virusuy joined #gluster
14:15 fandi joined #gluster
14:18 kdhananjay joined #gluster
14:18 theron joined #gluster
14:19 kdhananjay left #gluster
14:31 churnd joined #gluster
14:33 tdasilva joined #gluster
14:35 shaunm joined #gluster
14:37 meghanam joined #gluster
14:42 misko is this all right?
14:42 misko [root@xfc4 config]# ls -li qemu/test2*
14:42 misko 9641876990333391523 -rw-------. 1 root root 1831 16. led 09.26 qemu/test2.xml
14:42 misko 9641876990333391523 -rw-------. 1 root root 1831 16. led 09.26 qemu/test2.xml
14:42 glusterbot misko: -rw-----'s karma is now -3
14:42 glusterbot misko: -rw-----'s karma is now -4
14:42 misko glusterbot: thanks for the useful information
14:48 Gill joined #gluster
14:49 coredump joined #gluster
14:50 sauce joined #gluster
14:54 squizzi joined #gluster
15:01 theron joined #gluster
15:02 dgandhi joined #gluster
15:03 dgandhi joined #gluster
15:06 dgandhi joined #gluster
15:06 dgandhi joined #gluster
15:08 dgandhi joined #gluster
15:09 wushudoin joined #gluster
15:09 dgandhi joined #gluster
15:10 _Bryan_ joined #gluster
15:10 dgandhi joined #gluster
15:11 meghanam joined #gluster
15:11 plarsen joined #gluster
15:11 dgandhi joined #gluster
15:11 mator misko, find the file on bricks and compare its attributes
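
A sketch of that comparison, run on each server holding a replica (brick path hypothetical):

    # timestamps, size, inode as stored on the brick
    stat /bricks/b1/qemu/test2.xml

    # gluster's own metadata; differing trusted.afr.* or trusted.gfid
    # values between the replicas point at a pending or split entry
    getfattr -m . -d -e hex /bricks/b1/qemu/test2.xml
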
15:13 dgandhi joined #gluster
15:15 dgandhi joined #gluster
15:15 misko mator: the situation disappeared automagically...
15:17 dgandhi joined #gluster
15:17 bene2 joined #gluster
15:19 dgandhi joined #gluster
15:20 dgandhi joined #gluster
15:21 dgandhi joined #gluster
15:21 dgandhi joined #gluster
15:22 dgandhi joined #gluster
15:23 dgandhi joined #gluster
15:25 dgandhi joined #gluster
15:27 kryl partner, thank you for helping ;-)
15:27 dgandhi joined #gluster
15:28 dgandhi joined #gluster
15:29 dgandhi joined #gluster
15:29 doubt joined #gluster
15:31 dgandhi joined #gluster
15:31 dgandhi joined #gluster
15:32 fandi joined #gluster
15:32 dgandhi joined #gluster
15:33 dgandhi joined #gluster
15:34 dgandhi joined #gluster
15:36 fandi joined #gluster
15:36 dgandhi joined #gluster
15:37 dgandhi joined #gluster
15:39 dgandhi joined #gluster
15:39 glusterbot News from newglusterbugs: [Bug 1183054] rpmlint throws couple of errors for RPM spec file <https://bugzilla.redhat.com/show_bug.cgi?id=1183054>
15:40 fandi joined #gluster
15:41 nishanth joined #gluster
15:42 dgandhi joined #gluster
15:43 Psi|4ward joined #gluster
15:43 dgandhi joined #gluster
15:43 Psi|4ward Hi there, what's the better option - one brick on RAID1 or 2 bricks without raid?
15:44 fandi joined #gluster
15:44 dgandhi joined #gluster
15:45 wkf joined #gluster
15:46 dgandhi joined #gluster
15:47 dgandhi joined #gluster
15:48 wkf hey folks. I've got a problem for which I think gluster is a good fit. I'm going to deploy it on AWS today, in a relatively small cluster, spread over 3 AZs. Does anyone have any recommendations on instance type/volume type, or any pointers to documentation. I'd really appreciate any advice. Thanks!
15:52 jobewan joined #gluster
15:53 lmickh joined #gluster
15:54 ildefonso joined #gluster
16:00 mator psi raid vs non-raid ? what's the point ?
16:01 Psi|4ward store the brick1 on /dev/sdaX and /dev/sdbX or on /dev/mdX ?
16:01 mator psi do you ever need raid ?
16:01 mator (as like some sort of data protection)
16:01 Psi|4ward yes of course but often had problems with the rebuild process
16:02 mator well, gluster docs suggest bricks be raid luns, because raw hdds don't provide data protection
16:03 Psi|4ward i found nothing in the docs, but probably gluster's healing is better than raid rebuilding
16:03 mator for example, my servers has sda and sdb as luns from external hardware raid array
16:04 mator gluster healing != raid rebuilding
16:04 rwheeler joined #gluster
16:04 Psi|4ward what? i thought exacly this is the point here?
16:04 mator psi, chair != table
16:05 mator where did you read that healing equals raid rebuilding ?
16:05 Psi|4ward brick-replace && heal in any howto
16:06 mator well, healing works only for replicated volumes, and in case of distributed volume, what would you be doing?
16:07 mator besides, gluster healing != raid (data protection)
16:07 Psi|4ward of course we're talking about a replication set of >=2
16:07 mator you never mentioned replicated volume before
16:07 Psi|4ward but raid 1 - sorry for the miss
16:08 mator raid have sense for 1 server
16:08 mator replicated volume is N servers
16:08 mator so raid is doing your physical data protection on 1 server
16:08 Psi|4ward ive 3 gluster Nodes, each with 2 disks
16:09 mator gluster is doing data replication over N servers
16:09 mator so, what you volume would look like?
16:09 Iodun joined #gluster
16:10 Psi|4ward im with you but i could store the brick on the (software) RAID1 device or directly on the disks
16:10 Psi|4ward you mean what data is there? many small files for webapps and email-server
16:10 mator yes, but how many bricks would have each server?
16:11 Psi|4ward 2 on each if i dont use the RAID
16:11 Psi|4ward 1 on each if i combine the disks
16:13 mator i would do raid1 and export it as 1 brick from each of the servers
16:14 Psi|4ward but could you tell me why?
16:14 mator if you have more hdds on your servers, gluster howtos suggest raid5/raid6 on the disks, exported as 1 brick (depending on the number of disks), as well
16:14 mator because disks tend to fail
16:16 Psi|4ward thank you!
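
For the layout mator is suggesting, each server exposes its md RAID1 mount as a single brick; a sketch, with hypothetical names:

    # /bricks/b1 is the mounted /dev/mdX filesystem on every server
    gluster volume create myvol replica 3 \
        server1:/bricks/b1/data server2:/bricks/b1/data server3:/bricks/b1/data
    gluster volume start myvol
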
16:18 daMaestro joined #gluster
16:21 Philambdo joined #gluster
16:23 jobewan joined #gluster
16:24 kkeithley partner: 3.6.2beta2 .debs for wheezy are available on download.gluster.org.
16:30 hchiramm joined #gluster
16:38 dgandhi joined #gluster
16:41 bene2 joined #gluster
16:46 bennyturns joined #gluster
17:03 and` the log file for a particular mount point is being populated with several entries reporting that a specific resource is missing on subvolume volumename-replicate0 (http://fpaste.org/170589/21427704/raw/); 'gluster volume heal info' looks OK, does anyone have a suggestion about what's going on?
17:03 and` load on the machine is slowly increasing too
17:05 calisto joined #gluster
17:05 and` it's also not clear why the mentioned resource is malformed as in http://fpaste.org/170594/27900142/raw/
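
When heal info looks clean but a client log keeps complaining about a missing entry, the usual next step is to compare the entry on each replica's brick directly; a sketch, with hypothetical names:

    gluster volume heal myvol info
    gluster volume heal myvol info split-brain

    # then on each brick server: does the path exist, and do the gfids match?
    ls -l /bricks/b1/path/to/resource
    getfattr -n trusted.gfid -e hex /bricks/b1/path/to/resource
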
17:21 sputnik13 joined #gluster
17:23 sputnik13 joined #gluster
17:29 n-st joined #gluster
17:35 Arminder- joined #gluster
17:40 Arminder joined #gluster
17:48 rcampbel3 joined #gluster
18:07 jackdpeterson joined #gluster
18:09 jiffin joined #gluster
18:10 glusterbot News from newglusterbugs: [Bug 1183118] Gluster/NFS does not exit cleanly on reboot, leaving rpcbind registrations behind <https://bugzilla.redhat.com/show_bug.cgi?id=1183118>
18:18 rjoseph joined #gluster
18:33 swebb joined #gluster
18:42 TvL2386 joined #gluster
18:46 luckyluke joined #gluster
18:50 luckyluke hey guys, can someone help a newbie with a simple gluster task? ;)
18:57 TvL2386 joined #gluster
18:58 luckyluke left #gluster
19:03 sputnik13 joined #gluster
19:04 Philambdo joined #gluster
19:05 bene2 joined #gluster
19:16 fandi joined #gluster
19:22 squizzi joined #gluster
19:27 Arminder joined #gluster
19:30 Arminder joined #gluster
19:32 squizzi joined #gluster
19:34 Arminder joined #gluster
19:37 squizzi joined #gluster
19:55 Arminder- joined #gluster
20:07 zerick joined #gluster
20:15 lpabon joined #gluster
20:23 diegows joined #gluster
20:29 Arminder joined #gluster
20:32 Arminder joined #gluster
20:44 JustinClift joined #gluster
20:44 kalzz joined #gluster
20:44 bennyturns joined #gluster
20:45 dgandhi joined #gluster
20:58 ec2-user_ joined #gluster
21:00 wkf_ joined #gluster
21:02 sputnik13 joined #gluster
21:04 systemonkey joined #gluster
21:10 owlbot joined #gluster
21:22 sputnik13 joined #gluster
21:33 rolfb joined #gluster
21:35 sputnik13 joined #gluster
21:43 badone joined #gluster
21:44 fandi joined #gluster
21:50 wkf joined #gluster
21:53 rcampbel3 joined #gluster
21:54 B21956 left #gluster
21:58 dgandhi joined #gluster
22:10 sputnik13 joined #gluster
22:16 chirino joined #gluster
22:55 swebb joined #gluster
23:00 calum_ joined #gluster
23:10 jiffin joined #gluster
23:37 DarkBidou joined #gluster
23:38 DarkBidou hi
23:38 glusterbot DarkBidou: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
