IRC log for #gluster, 2014-08-28


All times shown according to UTC.

Time Nick Message
00:15 doekia joined #gluster
00:17 topshare joined #gluster
00:26 plarsen joined #gluster
00:29 sputnik13 joined #gluster
00:41 nickmoeck left #gluster
00:50 gildub joined #gluster
00:57 coredump joined #gluster
00:58 bmikhael joined #gluster
01:12 bala joined #gluster
01:16 saltsa joined #gluster
01:28 vimal joined #gluster
01:45 zerick joined #gluster
01:50 recidive joined #gluster
02:06 sputnik13 joined #gluster
02:16 _Bryan_ joined #gluster
02:25 topshare joined #gluster
02:25 7GHAAR9I5 joined #gluster
02:29 kdhananjay joined #gluster
02:41 haomaiwa_ joined #gluster
02:53 dusmant joined #gluster
02:54 topshare joined #gluster
03:22 topshare joined #gluster
03:34 rejy joined #gluster
03:43 shubhendu joined #gluster
03:49 itisravi joined #gluster
03:50 itisravi_ joined #gluster
03:53 saurabh joined #gluster
03:55 nbalachandran joined #gluster
03:56 kshlm joined #gluster
03:56 RameshN joined #gluster
03:59 xrandr how can I reconnect a gluster peer?
04:01 xleo joined #gluster
04:05 spandit joined #gluster
04:08 topshare joined #gluster
04:10 bmikhael joined #gluster
04:12 sputnik13 joined #gluster
04:12 kanagaraj joined #gluster
04:26 recidive joined #gluster
04:30 atinmu joined #gluster
04:30 dusmant joined #gluster
04:44 ramteid joined #gluster
04:46 anoopcs joined #gluster
04:46 Rafi_kc joined #gluster
04:46 rafi1 joined #gluster
04:47 hagarth joined #gluster
04:54 karnan joined #gluster
04:57 jobewan joined #gluster
04:59 bala joined #gluster
04:59 jiffin joined #gluster
05:01 lalatenduM joined #gluster
05:16 nshaikh joined #gluster
05:27 topshare joined #gluster
05:28 nishanth joined #gluster
05:32 RameshN_ joined #gluster
05:32 sputnik13 joined #gluster
05:33 ws2k33 joined #gluster
05:41 dusmant joined #gluster
05:50 ppai joined #gluster
05:52 prasanth_ joined #gluster
05:56 Jay3 joined #gluster
05:56 Jay3 hey - i'm getting a problem where some of my peers wont connect to each other
05:56 Jay3 i detached all of them and tried to reattach
05:57 Jay3 this is the 3.5.2 build
05:59 bmikhael joined #gluster
06:01 hchiramm Jay3, what error u r getting
06:02 prasanth_ joined #gluster
06:04 Jay3 so i think the install to these servers got botched somehow - i'm going to try to purge and reinstall real fast - but i get an error staging volume on specific servers in this group
06:04 Jay3 there are 30 servers
06:04 Jay3 one sec and ill see if it comes back
06:05 hchiramm sure..
06:05 kumar joined #gluster
06:06 emi1 joined #gluster
06:15 Jay3 so i script the peer probe, and i get a success from all of them - when i do peer status on all of them, some of them show peers disconnected
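    (A minimal sketch of scripting the probe, assuming a hypothetical peers.txt listing one hostname per line:)
        while read h; do gluster peer probe "$h"; done < peers.txt
        gluster peer status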
06:16 Jay3 it looks like its just taking a while
06:17 hchiramm yeah, it could be a delay to report ..
06:17 Jay3 ok all are connected - i'm going to create the volume and see if i get the same message as before
06:17 hchiramm k.. :)
06:17 partner joined #gluster
06:20 Jay3 maybe it was a purge thing - i had an existing config on those disconnected peers to a volume that no longer existed… the volume create seems to be working
06:20 Jay3 let me try to start the volume
06:21 Jay3 ya no… still have commit fails
06:24 Jay3 i dont know why - its happening on about 19 nodes in this 30 node cluster
06:25 Jay3 in the brick log on one of the failed hosts "failed to submit rpc-request (XID: 0x2 Program: Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport (glusterfs)"
06:28 rjoseph joined #gluster
06:46 hchiramm Jay3, was afk.. reading through
06:46 fabio joined #gluster
06:47 Jay3 no problem - i thought it might have been rpc insecure needed
06:47 Jay3 so added it - and restarted gluster but now i have a few servers that didnt restart
06:47 Jay3 Initialization of volume 'management' failed
06:50 Jay3 [glusterd-store.c:2222:glusterd_store_retrieve_volumes] 0-: Unable to restore volume: vol0
06:50 glusterbot New news from resolvedglusterbugs: [Bug 1078061] Need ability to heal mismatching user extended attributes without any changelogs <https://bugzilla.redhat.com/show_bug.cgi?id=1078061>
06:51 atinmu Jay3, can u paste gluster volume info output?
06:51 hchiramm Jay3, u have a glusterd dev here :)
06:51 hchiramm atinmu++ thanks
06:51 glusterbot hchiramm: atinmu's karma is now 1
06:52 Jay3 its a lot of output - 120 bricks
06:52 hchiramm Jay3, fpaste.org
06:52 atinmu what is the volume status showing, is it started?
06:52 hchiramm Jay3, paste it in fpaste and pass the url
06:53 Jay3 http://ur1.ca/i2j4m
06:53 glusterbot Title: #129193 Fedora Project Pastebin (at ur1.ca)
06:53 Jay3 volume is not started
06:54 hchiramm pastebin states it is started ? Status: Started
06:54 atinmu Jay3, its started
06:55 Jay3 hmm
06:55 Jay3 it failed on the start - commit failed
06:57 atinmu Jay3, here is what might have happened: when the volume start was triggered, one of the glusterd nodes received the trigger but went down in the middle of the transaction, so the rpc reply to the originator was negative and that's why it showed volume start failed, but the local commit succeeded and that's why you see the volume as started
06:57 Jay3 this is the current pool list - those downed hosts wont start gluster now http://ur1.ca/i2j5p
06:57 glusterbot Title: #129194 Fedora Project Pastebin (at ur1.ca)
07:01 Jay3 so on the hosts that wont start service - failed to initialize vol0
07:02 Jay3 i am getting [2014-08-28 06:49:36.499501] W [rdma.c:4194:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
07:02 Jay3 [2014-08-28 06:49:36.499514] E [rdma.c:4482:init] 0-rdma.management: Failed to initialize IB Device
07:02 Jay3 and beyond that - the service fails
07:02 hchiramm Jay3, that rdma error looks to be irrelevant here
07:02 Jay3 ok
07:02 hchiramm atinmu, is nt it ?
07:03 Jay3 moving further in this log
07:03 atinmu hchiramm, yes
07:03 Jay3 [2014-08-28 06:49:36.499520] E [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
07:03 Jay3 [2014-08-28 06:49:36.499552] W [rpcsvc.c:1535:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
07:03 rjoseph1 joined #gluster
07:03 Jay3 and then
07:03 Jay3 [2014-08-28 06:49:38.962016] E [glusterd-store.c:2222:glusterd_store_retrieve_volumes] 0-: Unable to restore volume: vol0
07:03 Jay3 [2014-08-28 06:49:38.962055] E [xlator.c:403:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
07:03 Jay3 [2014-08-28 06:49:38.962071] E [graph.c:307:glusterfs_graph_init] 0-management: initializing translator failed
07:03 Jay3 [2014-08-28 06:49:38.962081] E [graph.c:502:glusterfs_graph_activate] 0-graph: init failed
07:05 ricky-ti1 joined #gluster
07:06 atinm joined #gluster
07:10 stickyboy [2014-08-28 05:46:01.104314] W [dht-diskusage.c:232:dht_is_subvol_filled] 0-homes-dht: disk space on subvolume 'homes-replicate-0' is getting full (93.00 %), consider adding more nodes
07:11 stickyboy w00t?
07:23 topshare joined #gluster
07:24 hagarth joined #gluster
07:26 stickyboy joined #gluster
07:26 fsimonce joined #gluster
07:33 Jay3 so how do i fix those hosts
07:35 topshare joined #gluster
07:35 dusmant joined #gluster
07:37 nishanth joined #gluster
07:39 RameshN_ joined #gluster
07:39 RameshN joined #gluster
07:45 cristov_mac joined #gluster
07:45 anands joined #gluster
07:46 andreask joined #gluster
07:48 atinm joined #gluster
07:49 rjoseph joined #gluster
07:56 topshare joined #gluster
07:57 harish joined #gluster
08:06 hagarth joined #gluster
08:10 andreask joined #gluster
08:18 hchiramm Jay3, checking .. will be back soon
08:20 RameshN joined #gluster
08:20 RameshN_ joined #gluster
08:22 RameshN_ joined #gluster
08:24 RameshN joined #gluster
08:34 loomsen joined #gluster
08:35 loomsen hi folks, short question, if i have a sqlite db file, say 1G, and something changes, does gluster retransmit the whole file if I have a replicated setup? or does it deal with deltas consistently so i don't have to worry about possibly breaking the sqlite db?
08:37 ndevos loomsen: the replication is done client-side, each write/modification is sent to all the bricks that store the file
08:38 loomsen ndevos: ah right, so I should be fine then :) thank you very much
08:39 ndevos loomsen: just make sure that your db applications use locking, I think sqlite uses some non-shared locking (one host only) by default
08:40 loomsen ndevos: hmm, that might become a problem. I thought about sharing a php sessions file between the nodes. thank you for the hint
08:41 ndevos loomsen: I dont really know how sqlite works, you can probably find more info on use-cases for sqlite on NFS or other network filesystems
08:43 ndevos loomsen: for example, I've come across https://www.sqlite.org/wal.html before, so at least that is something to watch out for (or tune, or whatever)
08:43 glusterbot Title: Write-Ahead Logging (at www.sqlite.org)
08:47 loomsen ndevos: hmm, i used memcached in the past, but it didn't work very reliably. this sounds like too much of a hassle to implement safely, i might just implement a mysql backend and store it in our cluster then
08:48 ndevos loomsen: I'm no sqlite user, but if I would have a db that is about 1GB in size, I probably would move it to mysql/postgres or so :)
08:49 loomsen ndevos: yeah, makes more sense i guess :) thank you for the heads up
08:49 ndevos loomsen: you're welcome!
08:54 Zdez joined #gluster
09:10 Slashman joined #gluster
09:10 ranjan joined #gluster
09:11 ranjan Hi All, I have created a distributed volume using 4 hosts, and each host brick is 500 GB, but when i mount the volume its showing the size of only 1 brick.
09:11 ranjan should it be not showing 500x4 around 2TB?
09:15 shubhendu joined #gluster
09:16 lalatenduM ranjan, yes it should show  500x4 , can u plz pastebin the gluster v info <volname>
09:17 ranjan lalatenduM, http://pastebin.com/k9Kz4Vkn
09:17 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
09:17 harish joined #gluster
09:17 vimal joined #gluster
09:18 lalatenduM ranjan, the output shows you are actually using 3 bricks , not 4
09:18 ranjan left #gluster
09:18 ranjan joined #gluster
09:18 lalatenduM ranjan, the output shows you are actually using 3 bricks , not 4
09:19 ranjan lalatenduM, yes, initially it was 4, as a test now recreated it using 3
09:19 lalatenduM ranjan, ok.. dos the mount still shows 500GB?
09:19 lalatenduM s/dos/does/
09:19 glusterbot What lalatenduM meant to say was: ranjan, ok.. does the mount still shows 500GB?
09:19 ranjan lalatenduM, yes
09:20 lalatenduM ranjan, just for general knowledge , which version of Gluster you are using?
09:20 ranjan lalatenduM, http://fpaste.org/129214/40921762/
09:20 glusterbot Title: #129214 Fedora Project Pastebin (at fpaste.org)
09:21 ranjan lalatenduM, 3.5.2
09:21 lalatenduM ranjan, this mount you have done after the volume creation , right?
09:21 rjoseph joined #gluster
09:21 lalatenduM I mean the latest vol creation
09:21 ranjan lalatenduM, yes
09:22 hagarth joined #gluster
09:22 lalatenduM ranjan, can again try the mount after restarting the volume
09:22 ranjan lalatenduM, let me check that
09:23 lalatenduM ranjan, unmount the vol from client; restart the vol;mount the vol on client
09:23 lalatenduM ranjan, I am not sure why it is not showing 3X each brick, it should just work
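    (Spelled out, the umount/restart/remount sequence looks roughly like this, using placeholder names — volume nova, server node07.example.com, client mount /mnt:)
        umount /mnt
        gluster volume stop nova
        gluster volume start nova
        mount -t glusterfs node07.example.com:/nova /mnt
        df -h /mnt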
09:24 atinm ndevos, ping
09:24 ranjan lalatenduM, do you feel that i am doing some mistake, as i am new to gluster
09:24 glusterbot atinm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:24 atinm ndevos, ping
09:24 glusterbot atinm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:24 lalatenduM glusterbot++
09:24 lalatenduM atinm, :)
09:24 getup- joined #gluster
09:24 glusterbot lalatenduM: glusterbot's karma is now 7
09:26 lalatenduM ranjan, can plz pastebin all the commands you have used to create the volume and mount the volume?
09:26 ranjan lalatenduM, i will give you the steps i used.
09:26 getup- hi, is gluster aware of available disk space on its peers? e.g. when we have a system of 2 and add 2, rebalancing them will take a while which means the first 2 could potentially have less disk space available, is gluster smart enough to distribute data to the new 2 or could we end up in somewhat of a difficult situation?
09:27 ranjan lalatenduM, first i installed glusterfs-server package on all the 3 nodes
09:27 ranjan lalatenduM, then from on node i used the command gluster peer probe the other two
09:28 maxxx2014 joined #gluster
09:28 ranjan lalatenduM, then each node started showing Number of peers as 2
09:28 kdhananjay joined #gluster
09:29 lalatenduM getup-, it is intelligent enough to create new files on the new bricks if the old ones are full, but that will impact performance, so re-balance is preferable
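    (The usual add-brick/rebalance sequence, sketched with placeholder volume and brick names:)
        gluster volume add-brick myvol newhost1:/bricks/b1 newhost2:/bricks/b1
        gluster volume rebalance myvol start
        gluster volume rebalance myvol status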
09:29 lalatenduM ranjan, sounds fine till now
09:29 ranjan lalatenduM, on each 3 servers i have 2 hdd, in which the 2nd hdd is dedicated for bricks
09:30 lalatenduM ranjan, on the 2nd hdd which filesystem you have?
09:30 ranjan lalatenduM, created the xfs filesystem on second hdd
09:31 lalatenduM ranjan, did you create the partition with inode size as 512
09:31 ranjan lalatenduM, yes
09:31 lalatenduM ok
09:31 ranjan lalatenduM, then mounted that partition to /storage
09:32 ranjan lalatenduM, inside that /storage i created two directories ie. /storage/nova and /storage/swift
09:32 getup- lalatenduM: that's ok, it's just that rebalancing a million+ files can take op a lot of time which could mean that the original set of nodes could potentially fill up, i'd like to avoid that
09:32 getup- take up*
09:32 lalatenduM ranjan, ok, plz send me the vol creation command you used
09:32 lalatenduM getup-, I understand
09:33 maxxx2014 I'm having some performance problems with gluster. I've googled a lot, but could not find resources to help me. Before I ask for your help on the problem, could you please point me to a URI that has information on gluster client performance analysis methods?
09:33 ranjan lalatenduM,  gluster volume create nova transport tcp node07.example.com:/storage/nova/ node06.example.com:/storage/nova/ node05.example.com:/storage/nova/
09:33 ranjan lalatenduM, this is the command i created for creating volume
09:34 lalatenduM ranjan, the command looks fine :)
09:34 ranjan lalatenduM, :( then whats the problem
09:34 getup- rebalancing 1 million files took nearly 17 hours, which is a bit long, unfortunately
09:34 ranjan lalatenduM, is df -h giving me a wrong value ?
09:35 lalatenduM ranjan, ok lets go back little bit, can you plz pastebin "gluster peer status" out put
09:35 lalatenduM ranjan, I dont think df command has any issue
09:36 lalatenduM getup-, yeah , you might want to give the feed back on gluster-devel mailing list that it should take less time, I think there were some discussion around that
09:37 ranjan lalatenduM, http://fpaste.org/129218/18622140/
09:37 glusterbot Title: #129218 Fedora Project Pastebin (at fpaste.org)
09:37 lalatenduM ranjan, also I was to see output of df -h on each node
09:37 ranjan lalatenduM, with gluster mounted right?
09:37 lalatenduM s/was/want/
09:38 glusterbot What lalatenduM meant to say was: ranjan, also I want to see output of df -h on each node
09:38 lalatenduM ranjan, nope, with brick partition mounted
09:38 lalatenduM I want to make sure that the partitions are fine wrt size
09:39 lalatenduM maxxx2014, I only know abt https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-Administration_Guide-Performance_Enhancements.html
09:39 glusterbot Title: Chapter 11. Configuring Red Hat Storage for Enhancing Performance (at access.redhat.com)
09:39 getup- lalatenduM: there's a feature request published here http://www.gluster.org/community/documentation/index.php/Features/improve_rebalance_performance, it's probably up to the devs now to discuss when and how this can be implemented
09:40 lalatenduM maxxx2014, the concepts should hold true for gluster as well
09:41 ranjan lalatenduM, this is the output of df
09:41 lalatenduM getup-, but users feed back definitely helps to decide the priority :), if the bug already open , you can put ur comments
09:41 ranjan lalatenduM,  http://fpaste.org/129220/21887414/
09:41 glusterbot Title: #129220 Fedora Project Pastebin (at fpaste.org)
09:44 ranjan lalatenduM, oh god! i am doing something terribly wrong i believe
09:44 lalatenduM ranjan, what is it?
09:44 ranjan lalatenduM, the file created from one node is not shown in the other. :(
09:45 lalatenduM ranjan, is selinux is in enforcing mode? which distribution ur using?
09:45 ranjan lalatenduM, I am using CentOS 6.5 and selinux is enforcing
09:46 lalatenduM ranjan, hmm, can plz put selinux to permissive mode "setenforce 0" and check again i.e. umount;mount
09:46 ranjan lalatenduM, but gluster should work well with selinux right? any way i will try my luck
09:46 lalatenduM ranjan, selinux does not support gluster in EL,
09:47 lalatenduM ranjan, selinux support is started in fedora 20
09:47 ranjan lalatenduM, is it? so i should set it to permissive on all nodes right?
09:47 lalatenduM ranjan, yes
09:47 ranjan lalatenduM, one thing to note is there is no denials reported in audit log
09:48 maxxx2014 lalatendum: thanks, I'll check it out
09:48 lalatenduM maxxx2014, np
09:48 ranjan lalatenduM, should i restart services?
09:49 lalatenduM ranjan, lets unmount and remount if it does not work we can restart services and the vol
09:50 ranjan lalatenduM, still no luck
09:50 lalatenduM ranjan, ok stop the volume. restart glusterd, start volume
09:50 lalatenduM then mount the volume
09:51 Philambdo joined #gluster
09:52 lalatenduM ranjan, on all the nodes
09:52 lalatenduM ranjan, i meant mount the volume after you have restarted glusterd in all the ndoes
09:53 ranjan lalatenduM, done all those steps ! no luck :'(
09:54 lalatenduM ranjan, ok another one , are you sure iptable is causing any issue, is ok if we flush all the rules from iptable
09:54 lalatenduM s/causing/not causing/
09:54 glusterbot What lalatenduM meant to say was: ranjan, ok another one , are you sure iptable is not causing any issue, is ok if we flush all the rules from iptable
09:55 ranjan lalatenduM, these are openstack nodes which has many iptables rules preconfigured
09:55 ranjan lalatenduM, for testing i can flush, but do you think that will be a problem
09:55 ranjan lalatenduM, i have opened all ports as per the documentation
09:56 lalatenduM ranjan, that causes issues, not the one u r facing , I just wanted to make sure it is not because of this
09:56 lalatenduM ranjan, lets see the log files now
09:57 ranjan lalatenduM, which log you want to see?
09:57 lalatenduM ranjan, glusterd logs are at /var/log/glusterfs/
09:57 lalatenduM ranjan, lets check etc-gluster*.log in  /var/log/glusterfs/
09:58 ranjan lalatenduM, do you want recent logs or full logs
09:59 lalatenduM ranjan, check for any error, "grep '\] E \['"
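    (Written out in full against the glusterd log; the exact log file name can vary between setups:)
        grep '\] E \[' /var/log/glusterfs/etc-glusterfs-*.log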
10:01 haomaiwa_ joined #gluster
10:01 ranjan lalatenduM, yes there are many errors shown
10:01 ranjan lalatenduM, but most of them i feel was part of trial and error
10:02 ranjan should i flush the logs and try again?
10:02 lalatenduM ranjan, ok , nope, pastebin the errors , log file does not impact the behavior
10:03 lalatenduM ranjan, I meant don't flush the logs :)
10:04 ranjan lalatenduM, but those error are at 9 AM
10:04 ranjan after that there are no errors
10:06 ranjan lalatenduM, http://fpaste.org/129226/14092203/
10:06 glusterbot Title: #129226 Fedora Project Pastebin (at fpaste.org)
10:07 haomai___ joined #gluster
10:09 lalatenduM ranjan, nothing imp I can see , I am worried now :(
10:10 ranjan lalatenduM, :(
10:11 lalatenduM brb
10:13 glusterbot New news from newglusterbugs: [Bug 1134822] core : setting a volume ready-only needs a brick restart to make the volume read-only <https://bugzilla.redhat.com/show_bug.cgi?id=1134822> || [Bug 1117822] Tracker bug for GlusterFS 3.6.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1117822>
10:13 recidive joined #gluster
10:21 rolfb joined #gluster
10:25 diegows joined #gluster
10:45 maxxx2014 I seem to be having a bottleneck on a glusterfs client when doing writes of large files. The /usr/sbin/glusterfs process is almost maxing out a core, yet the throughput is only 12MB/sec. gluster is able to support much faster writes (verified with large DD's) and the source is too (also verified with large DD's). I read chapter 11 (and others) of the redhat storage documentation, but there
10:45 maxxx2014 was nothing in there to speed up clients. I've googled for information to debug this, but could not find anything yet.
10:45 maxxx2014 Could someone help me figure out why the client is maxing out a core on the server and achieving very low write speeds to gluster?
10:46 liquidat joined #gluster
10:46 maxxx2014 (the client is maxing out a core on the client, sorry)
10:52 ppai joined #gluster
10:52 kkeithley1 joined #gluster
11:03 getup- joined #gluster
11:10 loomsen left #gluster
11:21 chirino joined #gluster
11:22 todakure joined #gluster
11:23 deepakcs joined #gluster
11:27 rjoseph joined #gluster
11:28 shubhendu joined #gluster
11:29 lalatenduM ranjan, did it work for you by chance?
11:30 ranjan lalatenduM, no :(
11:31 ranjan lalatenduM, can you let me know on how to completely reset gluster installation
11:31 ranjan lalatenduM, coz, now the mounted volumes are behaving like readonly
11:32 lalatenduM ranjan, do you know the root cause of readonly behavior ? if not check if your bricks are readonly?
11:33 nbalachandran joined #gluster
11:33 ranjan lalatenduM, not sure, now i am deleting all volumes
11:33 edward1 joined #gluster
11:33 lalatenduM ranjan,  you should uninstall rpms, delete gluster dir in /etc and /var/log
11:34 lalatenduM ranjan, clean up the brick partitions too
11:34 ranjan lalatenduM, ok let me try
11:35 ranjan lalatenduM, oh god! removing gluster is removing my core openstack packages
11:35 lalatenduM ranjan, the vols should have worked , never behaved so oddly for me
11:36 lalatenduM ranjan, is it? ohh
11:36 lalatenduM ppai, should removing gluster also remove core openstack packages?
11:37 ppai lalatenduM, by "gluster" you mean gluster-swift ?
11:38 ppai lalatenduM, https://github.com/gluster/gluster-swift/blob/master/glusterfs-openstack-swift.spec#L23-L32
11:38 lalatenduM ranjan ^^, can you plz ans ppai 's question?
11:38 glusterbot Title: gluster-swift/glusterfs-openstack-swift.spec at master · gluster/gluster-swift · GitHub (at github.com)
11:38 lalatenduM ppai, checking
11:39 ranjan lalatenduM, ppai just removing glusterfs is removing qemu-kvm openstack nova-compute etc
11:39 ppai ranjan, it shouldn't remove nova, compute etc..
11:40 ranjan ppai, it is in my case, i can paste the yum if you want
11:40 ranjan ppai, i am using the official gluster repo
11:41 ppai ranjan, okay, i know only about gluster-swift and swift packages but send the yum output, i'll take a  look
11:42 ranjan ppai, http://fpaste.org/129278/92261541/
11:42 glusterbot Title: #129278 Fedora Project Pastebin (at fpaste.org)
11:46 karnan joined #gluster
11:48 lalatenduM ranjan, one question, when you 1st created the volume, did you create a replicate 3 volume?
11:49 ranjan lalatenduM, yes
11:49 ranjan lalatenduM, then i deleted it
11:49 lalatenduM ranjan, how did you delete it?
11:50 ranjan lalatenduM, that was the first time i was deleting the volume,
11:50 ranjan lalatenduM, deleted using gluster volume delete <vol name>
11:50 lalatenduM ranjan, I am sure you have not deleted the nova dir on /storage, but created the next distribute volume using the same nova dirs
11:51 ranjan lalatenduM,  i remember deleting .glusterfs directory manually
11:51 ranjan lalatenduM, but i tried with entirely a different directory too
11:52 lalatenduM ranjan, deleting .glusterfs is not enough :)
11:52 lalatenduM @reusingbricks
11:52 ranjan lalatenduM, then i deleted using gluster volume delete and it showed success
11:53 lalatenduM @sambavfs
11:53 glusterbot lalatenduM: http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
11:53 lalatenduM @bricks
11:53 lalatenduM @brick
11:53 glusterbot lalatenduM: I do not know about 'brick', but I do know about these similar topics: 'brick naming', 'brick order', 'brick port', 'clean up replace-brick', 'former brick', 'reuse brick', 'which brick'
11:53 lalatenduM @reusebrick
11:53 glusterbot lalatenduM: I do not know about 'reusebrick', but I do know about these similar topics: 'reuse brick'
11:53 lalatenduM @reuse brick
11:53 glusterbot lalatenduM: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
11:54 ira joined #gluster
11:54 lalatenduM ranjan, check the above blog to reuse bricks
11:54 lalatenduM ranjan, the issue you were seeing was because extended attributes from the earlier replicated volume were already set on the bricks
11:55 ranjan lalatenduM, oh!  but when we delete the brick as such, the extended attributes will also be gone right
11:56 lalatenduM ranjan, that is deliberately not done
11:57 ranjan lalatenduM, so how should i proceed
11:57 lalatenduM ranjan, when you delete a volume gluster deletes the vol, but the data on the bricks is not deleted
11:57 ranjan lalatenduM, but what happens if i delete the brick itself
11:57 ranjan ?
11:57 lalatenduM ranjan, because we dont want anyone to lose data because of a mistake
11:57 lalatenduM ranjan, yeah you have delete the brick it self to clean the data
11:57 lalatenduM s/have/have to/
11:58 glusterbot What lalatenduM meant to say was: ranjan, yeah you have to delete the brick it self to clean the data
11:58 ranjan lalatenduM, how can i list all the extended attributes
11:59 chirino_m joined #gluster
11:59 lalatenduM ranjan, getfattr -d -m . <file/dir>
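    (Listing the attributes on the brick used earlier in this log, and then clearing the stale ones per the "reuse brick" instructions glusterbot linked above, would look roughly like:)
        getfattr -d -m . -e hex /storage/nova
        setfattr -x trusted.glusterfs.volume-id /storage/nova
        setfattr -x trusted.gfid /storage/nova
        rm -rf /storage/nova/.glusterfs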
12:00 ranjan lalatenduM, ok will use that also to make sure i dont make a mistake
12:00 ranjan lalatenduM, let me try
12:00 ranjan lalatenduM, now i have added one more node
12:00 lalatenduM ranjan, I use this script to clean bricks https://github.com/LalatenduMohanty/utility_scripts/blob/master/cleaning_xattrs.pl
12:00 ranjan lalatenduM, okies
12:01 lalatenduM ranjan, if you want to delete the nova volume, first delete the vol through the gluster command, then rm -rf /storage/nova on all the nodes
12:01 ranjan lalatenduM, i am doing that
12:03 LebedevRI joined #gluster
12:04 topshare joined #gluster
12:04 ranjan lalatenduM, ok now its showing no volumes present in the cluster
12:04 ranjan lalatenduM, now from the server i am going to create  the volume
12:06 ranjan lalatenduM, volume created
12:07 LHinson joined #gluster
12:08 ranjan lalatenduM, again its behaving readonly
12:08 ranjan lalatenduM, :(
12:09 lalatenduM ranjan, can you plz check if your brick i.e. /storage/nova is writable
12:09 lalatenduM ranjan, also plz share the command you used to mount it
12:11 ranjan yes /storage/nova is writeable
12:11 ranjan lalatenduM, i mounted it using mount.glusterfs node07.example.com:/mynova /mnt
12:12 lalatenduM ranjan, mount cmd is fine
12:13 lalatenduM ranjan, did you check all bricks
12:14 mojibake joined #gluster
12:14 sputnik13 joined #gluster
12:15 ranjan lalatenduM, yes all bricks are writeable
12:15 lalatenduM ranjan, what abt the size of the mount point? is it around 500GB
12:15 itisravi joined #gluster
12:16 ranjan lalatenduM, mountpoint is 559 gb
12:17 lalatenduM ranjan, that's sad, one last request, can plz reformat /storage on all the nodes
12:17 liquidat_ joined #gluster
12:18 lalatenduM ranjan, unmount the vol, delete the vol, unmount /storage, reformat (xfs) , mount it, recreate the vol
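    (Roughly, on each node — /dev/sdb1 as the second disk is an assumption here:)
        umount /mnt                                        # on the client
        gluster volume stop nova && gluster volume delete nova
        umount /storage
        mkfs.xfs -f -i size=512 /dev/sdb1
        mount /dev/sdb1 /storage && mkdir /storage/nova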
12:19 RaSTar joined #gluster
12:19 ranjan lalatenduM, ok
12:19 ranjan lalatenduM, should i try with less number of peers?
12:19 ranjan lalatenduM, by removing the peer lists too?
12:20 Guest96472 joined #gluster
12:21 rjoseph joined #gluster
12:21 lalatenduM ranjan, two node is fine, no need to change the peer list
12:22 lalatenduM ranjan, I have to leave now, will login again after 3 hours, you can leave me a msg with glusterbot
12:22 lalatenduM @later
12:22 glusterbot lalatenduM: I do not know about 'later', but I do know about these similar topics: 'latest'
12:23 lalatenduM glusterbot, @later tell lalatenduM blah blah..
12:23 lalatenduM ranjan, ttyl bye
12:24 kkeithley_ @later, tell lalatenduM to enjoy his three day weekend
12:24 kkeithley_ @later tell lalatenduM to enjoy the three day weekend
12:24 glusterbot kkeithley_: The operation succeeded.
12:25 bala joined #gluster
12:30 topshare joined #gluster
12:33 julim joined #gluster
12:39 getup- joined #gluster
12:39 julim joined #gluster
12:41 Guest96472 joined #gluster
12:42 B21956 joined #gluster
12:43 LebedevRI joined #gluster
12:54 theron joined #gluster
12:55 recidive joined #gluster
12:57 ranjan hi all , can someone help me out in opening required ports in iptables for gluster to work
12:57 ranjan when i flush iptables everything is working
12:58 bennyturns joined #gluster
13:00 ndevos ~ports | ranjan
13:00 glusterbot ranjan: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
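    (A hedged iptables example for these ports; the brick port range is an assumption and depends on how many bricks each node hosts:)
        iptables -A INPUT -p tcp -m multiport --dports 24007,24008 -j ACCEPT
        iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT
        iptables -A INPUT -p tcp -m multiport --dports 111,2049,38465:38468 -j ACCEPT
        iptables -A INPUT -p udp --dport 111 -j ACCEPT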
13:01 dusmant joined #gluster
13:06 bene2 joined #gluster
13:06 chirino joined #gluster
13:09 chirino joined #gluster
13:18 chucky_z joined #gluster
13:19 chucky_z hello, in the case of heal-failed files that are only displaying a gfid -- can I do some kind of forceful overwrite on them?
13:23 ekuric joined #gluster
13:34 tdasilva joined #gluster
13:34 coredump joined #gluster
13:35 maxxx2014 joined #gluster
13:35 bala joined #gluster
13:48 andreask joined #gluster
13:50 nbvfuel joined #gluster
13:55 coredump joined #gluster
13:56 kumar joined #gluster
13:59 theron joined #gluster
14:04 diegows joined #gluster
14:04 dberry joined #gluster
14:04 dberry joined #gluster
14:11 ekuric joined #gluster
14:11 getup- joined #gluster
14:12 lmickh joined #gluster
14:12 wushudoin joined #gluster
14:13 anands left #gluster
14:18 Norky joined #gluster
14:19 bennyturns joined #gluster
14:26 xleo joined #gluster
14:27 xleo joined #gluster
14:27 bennyturns joined #gluster
14:33 bala joined #gluster
14:35 _Bryan_ joined #gluster
14:44 recidive joined #gluster
15:06 edwardm61 joined #gluster
15:26 SteveCooling joined #gluster
15:28 nbvfuel We're inheriting servers with a single (large) Ext4 mount (RAID10 drives) mounted at /.  The doc (and gluster tool) recommend not mounting to this partition.  What are we giving up by doing it?
15:30 emi2 left #gluster
15:34 mojibake joined #gluster
15:40 mojibake joined #gluster
15:44 bjornar joined #gluster
15:48 sputnik13 joined #gluster
15:51 ndevos nbvfuel: it is unfortunate if users of a network volume/export/share can fill-up the / filesystem and cause problems on the server
15:52 gmcwhistler joined #gluster
15:52 nbvfuel Ahh, OK.  But if we understand our growth rate and plan accordingly, we should be OK.
15:52 ndevos nbvfuel: you also will see unexpected usage/free in the output of 'df' for volumes
15:52 nbvfuel I wasn't sure if there were performance considerations/contention around the disk.
15:53 nueces joined #gluster
15:53 ndevos and, volume-snapshot in 3.6 needs bricks on their own logical volume (lvm)
15:53 nbvfuel OK-- df, with respect to mounted gusterfs volumes?
15:53 glusterbot nbvfuel: OK's karma is now -1
15:53 ndevos yes
15:53 theron joined #gluster
15:54 ndevos and performance could be an issue, disks for one volume affect performance for other volumes, and even the disks for your OS
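    (A sketch of giving each brick its own logical volume, with invented VG/LV and mount names:)
        lvcreate -L 500G -n brick1 vg_gluster
        mkfs.xfs -i size=512 /dev/vg_gluster/brick1
        mkdir -p /bricks/brick1
        echo '/dev/vg_gluster/brick1 /bricks/brick1 xfs defaults 0 0' >> /etc/fstab
        mount /bricks/brick1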
15:54 nbvfuel Great, thanks for the insight.
15:55 nbvfuel What's the difference between 3.4 and 3.5?  If I'm planning to roll this out to a 'new' environment soon, should I be targeting 3.5?
15:55 ndevos 3.5 should be stable, and will be supported a little longer than 3.4
15:55 dtrainor joined #gluster
15:56 nbvfuel In my research, the prevailing wisdom (for serving static web assets, for example) is to use an NFS client instead of the fuse client.  Is this still true?
15:58 ranjan @later, tell lalatenduM the issue is fixed. problem was with iptables and some new ports introduced in the recent releases
15:58 ndevos yes, the Linux NFS client is more optimized for small files than the FUSE client, so using nfs probably benefits you
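    (Gluster's built-in NFS server speaks NFSv3 over TCP, so a client mount would look roughly like this, with example server and volume names:)
        mount -t nfs -o vers=3,proto=tcp server1.example.com:/webassets /var/www/assets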
15:58 nbvfuel I understand you have to give up "auto failover" going that route, too.  Any other points to consider?  The static assets are mainly images/css, ranging from 1KB-500KB files.
15:59 ndevos ranjan: you need to use @later without the comma, otherwise it wont work (I think)
15:59 ranjan @later tell lalatenduM the issue is fixed. problem was with iptables and some new ports introduced in the recent releases
15:59 glusterbot ranjan: The operation succeeded.
15:59 ndevos ranjan: :)
15:59 ranjan ndevos, thank you :)
16:00 ranjan @later tell lalatenduM Thank you for your time
16:00 glusterbot ranjan: The operation succeeded.
16:00 ndevos nbvfuel: you should consider rrdns and virtual-ips for mounting the nfs exports
16:03 nbvfuel ndevos: Thanks!  I'll add that to my list.  To start, we'll be running a client on the server as well (web servers are also the file servers in this setup.)  New web servers will be strictly clients and as we grow we may put 'true' web servers in front of the original gluster to make them strictly file servers.  Terrible idea to run client and server on the same machine?
16:04 ndevos nbvfuel: not a 'terrible' idea, but many users will not want to change storage servers too much, or have additional software on them
16:05 ndevos nbvfuel: I guess it also is a question on how you need your webfarm to scale, setting up temporary/additional webservers is relatively easy, adding/removing storage servers is more work
16:07 nbvfuel ndevos: The storage is strictly for web server assets.  The goal is to remove single points of failure with each web server having a full [gluster] mirror of data.  If we grow to need more web servers, they can easily be clients.
16:11 PeterA joined #gluster
16:11 diegows joined #gluster
16:12 hagarth joined #gluster
16:14 dusmant joined #gluster
16:16 ira joined #gluster
16:18 PeterA i think i catch a bug?
16:18 ekuric1 joined #gluster
16:18 PeterA when i do a ls in dir as non-pri user, and hit a dir that i can not read with permission denied,
16:18 PeterA i got these in brick log:
16:18 PeterA [2014-08-28 16:16:34.324009] E [marker.c:327:marker_getxattr_cbk] 0-sas03-marker: dict is null
16:18 PeterA [2014-08-28 16:16:34.324047] E [server-rpc-fops.c:796:server_getxattr_cbk] 0-sas03-server: 8647789: GETXATTR /DevMordorHomeSata03//jrose (043f2d28-7bf4-44ef-8b91-a68bf004058d) ((null)) ==> (Permission denied)
16:19 PeterA i think i should file a big
16:19 PeterA file a bug
16:19 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:19 semiosis PeterA: is there a problem, or is it just a log message?
16:19 PeterA log message
16:19 daMaestro joined #gluster
16:20 PeterA i can say it's cosmetic
16:20 PeterA but it's annoying as we do monitor our logs
16:20 PeterA and this could get VERY annoying and make us ignore some real issue
16:23 PeterA and this only happen over gNFS
16:23 PeterA if i mount it from glusterfs and do the same as non-pri user, no error
16:27 bala joined #gluster
16:30 semiosis interesting
16:31 bmikhael joined #gluster
16:31 semiosis seems like it should at least be a warning (W) instead of error (E)
16:31 semiosis probably worth filing a bug about it imo
16:31 PeterA YES!!
16:31 PeterA just did
16:32 PeterA https://bugzilla.redhat.com/show_bug.cgi?id=1135016
16:32 glusterbot Bug 1135016: medium, unspecified, ---, gluster-bugs, NEW , gettxattr and other filesystem ops error over gluster NFS
16:32 coredump joined #gluster
16:32 recidive joined #gluster
16:33 LHinson joined #gluster
16:33 LHinson joined #gluster
16:43 m0zes joined #gluster
16:45 glusterbot New news from newglusterbugs: [Bug 1135016] gettxattr and other filesystem ops error over gluster NFS <https://bugzilla.redhat.com/show_bug.cgi?id=1135016>
16:47 andreask joined #gluster
16:49 dtrainor joined #gluster
16:50 anoopcs joined #gluster
16:59 zerick joined #gluster
17:00 theron joined #gluster
17:07 recidive joined #gluster
17:08 bala joined #gluster
17:19 ricky-ticky joined #gluster
17:22 andreask joined #gluster
17:23 diegows joined #gluster
17:26 bene2 joined #gluster
17:41 jbrooks left #gluster
17:42 jbrooks_ joined #gluster
17:47 glusterbot New news from newglusterbugs: [Bug 1131271] Lock replies use wrong source IP if client access server via 2 different virtual IPs [patch attached] <https://bugzilla.redhat.com/show_bug.cgi?id=1131271> || [Bug 1077406] Striped volume does not work with VMware esxi v4.1, 5.1 or 5.5 <https://bugzilla.redhat.com/show_bug.cgi?id=1077406>
17:47 longshot902 joined #gluster
17:52 plarsen joined #gluster
17:52 LHinson joined #gluster
17:55 _dist joined #gluster
17:57 prasanth_ joined #gluster
17:58 kumar joined #gluster
18:01 clyons joined #gluster
18:06 theron_ joined #gluster
18:06 andreask joined #gluster
18:11 jobewan joined #gluster
18:25 ira joined #gluster
18:29 qdk joined #gluster
18:38 elico joined #gluster
18:40 LHinson joined #gluster
18:56 bene joined #gluster
19:12 nated is there a less brittle way to parse the output of geo-replication's status info?
19:13 nated --mode=script has no effect
19:15 semiosis nated: do you have libxml2 installed?
19:15 nated probably
19:15 nated if nto, I can install it
19:15 nated but --xml yields some less than useful information (like 4 lines)
19:16 * DV open an eye ...
19:16 semiosis sorry, confused --mode=script w/ --xml
19:16 nated no, --mode=script has no effect on the geo-replication status output, and --xml yields no useful info
19:17 nated let me pastebin this, one sec
19:17 semiosis --mode=script has no effect on gluster volume info for me.  can you give an example where it does something?
19:18 nated http://fpaste.org/129416/40925347/
19:18 glusterbot Title: #129416 Fedora Project Pastebin (at fpaste.org)
19:19 nated --mode=script seems to remove prompts on things like volume delete
19:19 semiosis right
19:19 semiosis also, what version are you using?
19:20 nated 3.5.2 on ubuntu 12.04
19:20 nated (using my own custom debs based from yours but with tweaks to make geo-replication go)
19:21 semiosis care to share your changes?
19:21 nated did I forget to update the bugzilla bug with an updated patch?
19:22 semiosis oh that's you
19:22 semiosis great
19:22 nated yah
19:23 semiosis bug 1132766
19:23 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1132766 unspecified, unspecified, ---, gluster-bugs, NEW , ubuntu ppa: 3.5 missing hooks and files for new geo-replication
19:23 semiosis attachment there is from 22 Aug
19:24 semiosis do you have updates since then?
19:24 nated yes
19:25 nated can I just attach my working debian tarball to that patch versus a diff?
19:26 nated I blew away that build dir and don't have time to regenerate it for a proper diff
19:28 semiosis fine with me
19:28 nated attached that
19:28 semiosis thanks
19:29 semiosis i've started using github to manage the debian packages, https://github.com/semiosis/glusterfs-debian
19:29 glusterbot Title: semiosis/glusterfs-debian · GitHub (at github.com)
19:29 semiosis just fyi
19:29 m0zes joined #gluster
19:29 semiosis i'll merge your changes in manually
19:29 nated note: I don't have a lot of experience in deb packaging... so I'm translating a bit from rpm, def. check my work :)
19:29 semiosis heh, i'm no expert
19:29 nated ahh, cool
19:29 semiosis but i'll do my best to get it in
19:30 nated if I have some time this evening or over the weekend, I'll try to just make an intelligent pr
19:30 semiosis cool
19:37 B21956 joined #gluster
19:41 chirino joined #gluster
19:55 recidive joined #gluster
20:16 nated so yeah, any reason why --xml doesn't return anything useful for geo-replication?
20:16 nated http://fpaste.org/129416/40925347/
20:16 glusterbot Title: #129416 Fedora Project Pastebin (at fpaste.org)
20:17 semiosis probably should file a bug about that
20:17 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:19 _dist JoeJulian: it looks like Frank77 and myself have convinced proxmox to up their gluster to 3.5.2. So now that it should be stable soon in their package I have to ask why earlier on you said to run 3.4.5 instead of 3.5.2 for VM hosting ?
20:33 bene joined #gluster
20:36 sputnik13 joined #gluster
20:47 glusterbot New news from newglusterbugs: [Bug 1135116] geo-replication: --xml output broken and/or misleading <https://bugzilla.redhat.com/show_bug.cgi?id=1135116>
21:11 mbukatov joined #gluster
21:20 daMaestro joined #gluster
21:25 tankenmate joined #gluster
21:25 tankenmate is georep production ready?
21:26 nated in 3.5?
21:28 tankenmate is the ubuntu ppa 3.5 yet?
21:28 tankenmate it's been a while since i checked
21:28 nated not without some hackery
21:29 nated well, yes, 3.5 is in the ubuntu ppa, but geo-rep is currently broken in the ppa'ed packages
21:29 tankenmate hmm 3.5qa is listed in the Packages file
21:29 tankenmate nated: ahh :(
21:29 tankenmate nated: does gluster support "write mostly" nodes?
21:30 nated https://launchpad.net/~semiosis/+archive/ubuntu/ubuntu-glusterfs-3.5
21:30 glusterbot Title: ubuntu-glusterfs-3.5 : Louis Zuckerman (at launchpad.net)
21:30 tankenmate yeah i just looked and saw that (at least i'm looking in the right place)
21:30 nated I have some patches in bugzilla to get the geo-rep bits fixed in the packages, but testing so far is... shaky
21:30 semiosis moving the PPAs to launchpad.net/~gluster -- new releases will be made there
21:31 tankenmate yeah, i need this for production on a govt tender in about 3-4 months time :/
21:31 tankenmate semiosis: ta :)
21:31 semiosis i'll get nated's patches merged, possibly today, and you can start testing soon tankenmate
21:31 semiosis yw
21:31 tankenmate w00t! much excellence :)
21:32 nated there's a bunch of fixes in 3.6 that seemingly will make geo-rep better
21:32 tankenmate so does gluster support a "write mostly" node member?
21:33 tankenmate because my other option would be dmraid with a netbd via ssh :/
21:33 tankenmate write mostly & write back
21:33 nated i don't understand your term 'write mostly'
21:34 tankenmate as in it suppresses reads to the write mostly node
21:34 tankenmate i.e. it acts somewhat like a streaming backup
21:34 semiosis that's geo-rep afaict
21:35 tankenmate is geo-rep real time? or scheduled?
21:35 semiosis geo-rep is real time, asynch
21:35 semiosis so there may be a little delay, but it's continuous
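    (For 3.5-style distributed geo-replication, the setup is roughly as below; volume and host names are placeholders:)
        gluster system:: execute gsec_create
        gluster volume geo-replication mastervol slavehost::slavevol create push-pem
        gluster volume geo-replication mastervol slavehost::slavevol start
        gluster volume geo-replication mastervol slavehost::slavevol status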
21:35 tankenmate cool, that is indeed what i am looking for
21:35 semiosis and of course not really a backup, since you can't restore from a previous state.  it's just another replica
21:36 tankenmate dmraid in the linux kernel does it by effectively giving the device an infinite scheduling penalty for reads
21:36 semiosis however if you were using zfs or lvm to snapshot the geo-rep target, then i'd call it backups
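    (For example, if the slave bricks sat on LVM or ZFS — names here are invented:)
        lvcreate -s -L 10G -n georep-snap-20140828 /dev/vg0/georep_brick
        zfs snapshot tank/georep@20140828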
21:36 tankenmate yeah i need a real time replica for a first level site wide failover; i.e. first level in a disaster plan
21:37 tankenmate yeah, i'd be doing snapshots from the replica to reduce load on the production systems
21:39 tankenmate semiosis: thanks for your help and I'll keep an eye out for 3.6 + patches and i'll test them
21:39 semiosis yw
21:39 tankenmate ciao
21:39 semiosis it'll be 3.5.2, not 3.6
21:40 tankenmate ahh, ok, thanks
21:40 semiosis and the patches are just to the packaging, not the glusterfs source
21:40 semiosis it'll be in ppa:gluster/glusterfs-3.5 in the next couple/few days.
21:40 tankenmate so it is a packaging bug that prevents geo-rep from working?
21:40 semiosis yes
21:40 tankenmate ok cool
21:41 tankenmate i probably won't be able to set it up / test it until after the 15th or so
21:41 semiosis ok
21:41 tankenmate i'll get it up and running on DO and let you know how it works out
21:41 semiosis great
21:42 semiosis DO?
21:42 tankenmate ciao
21:42 tankenmate digitalocean
21:42 semiosis ahh nice
21:42 semiosis ok have a good one
21:42 tankenmate i might even do a write up for DO's knowledge base on how to set it up properly; so once i get it going i might ask for some pointers in case i made any mistakes
21:43 semiosis sounds good
21:43 tankenmate ta muchly, ciao
21:43 tankenmate left #gluster
22:01 recidive joined #gluster
22:03 marcoceppi_ joined #gluster
22:05 vxitch joined #gluster
22:05 balacafalata joined #gluster
22:05 vxitch hello! is it possible to change the replica count of an existing volume?
22:06 PeterA how can i found this in ubuntu??
22:06 PeterA glusterfs/api/glfs-handles.h
22:08 vxitch why is glusterfs-server not available as a package from centos repo? or is it? it only appears in epel for i686
22:26 wgao_ joined #gluster
22:27 plarsen joined #gluster
22:30 evanjfraser left #gluster
22:30 marmalodak left #gluster
22:32 wgao_ joined #gluster
22:32 sputnik13 so...  I have a theory that gluster is causing some of the hangups on my openstack compute nodes...  reason being...  when a VM is destroyed, the kvm process goes defunct...  then subsequently trying to delete the image that the VM was using likewise hangs, while I'm able to delete other images just fine...
22:33 sputnik13 does this make sense or am I barking up the wrong tree?
22:33 sputnik13 this doesn't happen consistently but frequently enough to be a big issue for us
22:44 nated hurm.  I have hosts A, B, and C.  A and B both geo-replicate their volumes to C, and C then geo-replicates B to A, and A to B
22:44 nated the replication sessions from C to A and B work fine, the replication from A to C and B to C both fail to start
22:46 nated it seems the secret.pem keys that A and B copy to C are single-use keys, but the "start" command then calls a command (gluster --xml) not defined by the keys
22:46 nated so the start fails with a ssh permission denied
22:46 nated even though the start command returns a success value
22:54 nated oh, looks like C updates the common_secret.pem.pub on A and B, so they push the wrong creds back to C
23:12 _pol joined #gluster
23:13 nueces joined #gluster
23:23 elico joined #gluster
23:24 rotbeard joined #gluster
23:30 topshare joined #gluster
23:47 jermudgeon joined #gluster
