IRC log for #gluster, 2015-11-11


All times shown according to UTC.

Time Nick Message
00:16 calavera joined #gluster
00:22 gem joined #gluster
00:28 18VAA049C joined #gluster
00:28 7YUAAANHD joined #gluster
00:30 mlhamburg joined #gluster
00:40 mlhamburg__ joined #gluster
00:46 beeradb joined #gluster
01:01 maserati|work|af joined #gluster
01:01 zhangjn joined #gluster
01:12 EinstCrazy joined #gluster
01:40 EinstCra_ joined #gluster
01:45 jmarley joined #gluster
01:49 R0ok_ joined #gluster
01:54 Lee1092 joined #gluster
02:02 suliba joined #gluster
02:10 DV_ joined #gluster
02:10 harish joined #gluster
02:16 calavera joined #gluster
02:21 haomaiwa_ joined #gluster
02:31 mikemol Nevermind. Found http://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/2.2.0/CentOS/
02:31 glusterbot Title: Index of /pub/gluster/glusterfs/nfs-ganesha/2.2.0/CentOS (at download.gluster.org)
02:31 mikemol Weird. I don't understand why Google couldn't find it for me, or why I needed it in the first place. I had to use gmane to search gluster-users...
02:32 nangthang joined #gluster
02:40 kotreshhr joined #gluster
02:52 DV_ joined #gluster
02:53 ayma joined #gluster
03:01 haomaiwa_ joined #gluster
03:11 nneul joined #gluster
03:11 bharata-rao joined #gluster
03:11 zhangjn joined #gluster
03:13 nneul Is it safe to remove empty directories under the .glusterfs dir? I've got several small volumes on a particular set of servers where the 65,000+ empty dirs just put undue strain on the backup system, given that the volume itself may only contain a thousand or so files.
03:15 nneul Some of the volumes appear to have only a few dirs in there - I'm guessing those once had files in them, and the others never had that many distinct files, so the dirs never got created in the first place?
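[For context: each brick's .glusterfs directory holds a two-level hash tree (00/00 through ff/ff, up to 65,536 directories) of GFID hard links, and the subdirectories appear to be created on demand as GFIDs hash into them, which matches the difference nneul describes between volumes. Rather than deleting anything under .glusterfs, the usual way to take the strain off a backup is to back up through a client mount, or exclude .glusterfs when backing up the brick directly. A minimal sketch for counting the empty ones, with a hypothetical brick path:
    find /bricks/vol1/.glusterfs -mindepth 2 -maxdepth 2 -type d -empty | wc -l ]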
03:16 gildub joined #gluster
03:20 DavidVargese joined #gluster
03:22 David-Varghese joined #gluster
03:23 David_Vargese joined #gluster
03:24 shaunm joined #gluster
03:28 rlehtinen joined #gluster
03:29 rlehtinen test
03:29 rlehtinen hello
03:29 glusterbot rlehtinen: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
03:30 rlehtinen Is Ceph better than Gluster?
03:33 javi404 joined #gluster
03:53 ayma1 joined #gluster
04:01 haomaiwa_ joined #gluster
04:23 [7] joined #gluster
04:31 ramteid joined #gluster
04:33 calavera joined #gluster
04:36 cjellick joined #gluster
04:36 cjellick hi. I'm new to gluster, kicking the tires. I've created a volume and now I want to add a brick to it, just to prove that works. Adding the brick went fine, but when I try to rebalance I get an error saying "Volume data is not a distribute volume or contains only 1 brick."
04:36 cjellick any ideas?
04:37 cjellick here's a gist of the output for gluster volume info: https://gist.github.com/cjellick/e87043ce4e1dc45ae0b8
04:37 glusterbot Title: gist:e87043ce4e1dc45ae0b8 · GitHub (at gist.github.com)
04:38 Jmainguy cjellick: is it just 1 brick in the volume?
04:38 Jmainguy nm
04:39 Jmainguy cjellick: so you basically have a mirror across 4 bricks
04:39 Jmainguy cjellick: so there is nothing to rebalance, they are all identical copies
04:40 cjellick jmainguy i added the fourth brick
04:40 Jmainguy cjellick: it should have copies of the files on brick 4 now
04:41 cjellick was following this guide http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Expanding_Volumes
04:41 cjellick which said to rebalance as the last step (granted that is old, but I can't find docs for 3.7.5)
04:41 cjellick but yeah, no files in the fourth brick
04:43 Vaelatern joined #gluster
04:43 Jmainguy I *think* you just need to rebalance if it's a distributed or distributed-replicated volume
04:43 Jmainguy don't think you need to rebalance on a replica volume
04:43 Jmainguy https://gluster.readthedocs.org/en/release-3.7.0/Features/rebalance/ mentions DHT a lot
04:43 glusterbot Title: Rebalance - Gluster Docs (at gluster.readthedocs.org)
04:44 Jmainguy since adding the 4th brick didn't add or reduce storage size, I think you're good
04:44 Jmainguy check volume status
04:44 Jmainguy might show the other one syncing or something
04:45 calavera joined #gluster
04:50 the-me joined #gluster
04:55 F2Knight joined #gluster
04:57 cjellick didn't find any syncing or anything, but thanks for the help, jmainguy. Will keep digging.
05:01 haomaiwang joined #gluster
05:02 Jmainguy cjellick: gluster volume status pulp detail
05:02 Jmainguy where pulp is the name of your volume; gives some cool info
05:03 cjellick cool. thanks
05:04 Jmainguy sure gl
05:04 Jmainguy gonna head off
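[A minimal sketch of the expand-then-check flow discussed above, assuming the volume is named "data" (taken from the error message) and a hypothetical brick path. On a pure replica volume the replica count is raised on add-brick and the new brick is populated by self-heal, not rebalance:
    gluster volume add-brick data replica 4 node4:/bricks/data
    gluster volume heal data full          # trigger a full self-heal onto the new brick
    gluster volume status data detail      # per-brick process, port, and disk usage
    gluster volume rebalance data start    # only applies to distribute / distributed-replicate layouts ]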
05:17 vimal joined #gluster
05:18 nishanth joined #gluster
05:30 tdasilva joined #gluster
05:52 p0rtal joined #gluster
05:53 p0rtal_ joined #gluster
05:53 kovshenin joined #gluster
06:01 haomaiwa_ joined #gluster
06:07 ashka` joined #gluster
06:09 mlhamburg joined #gluster
06:09 DV_ joined #gluster
06:09 TheSeven joined #gluster
06:40 javi404 joined #gluster
06:56 cjellick joined #gluster
06:58 jwd joined #gluster
07:01 64MAEAPKQ joined #gluster
07:07 Philambdo joined #gluster
07:10 kovshenin joined #gluster
07:17 mhulsman joined #gluster
07:22 jtux joined #gluster
07:40 hagarth joined #gluster
07:44 wushudoin joined #gluster
07:44 linagee joined #gluster
07:44 armyriad joined #gluster
07:44 maserati|work|af joined #gluster
07:44 mlhess joined #gluster
07:44 cholcombe joined #gluster
07:44 prg3 joined #gluster
07:45 Ramereth joined #gluster
07:45 kotreshhr left #gluster
07:53 maserati|work|af joined #gluster
08:00 Vaelatern joined #gluster
08:00 hagarth joined #gluster
08:01 haomaiwa_ joined #gluster
08:02 Norky joined #gluster
08:06 deniszh joined #gluster
08:06 suliba joined #gluster
08:07 [Enrico] joined #gluster
08:09 morse joined #gluster
08:21 bhuddah joined #gluster
08:24 hagarth joined #gluster
08:40 anoopcs joined #gluster
08:51 ctria joined #gluster
08:54 p0rtal joined #gluster
09:01 haomaiwa_ joined #gluster
09:10 LebedevRI joined #gluster
09:10 crashmag_ joined #gluster
09:15 imilne joined #gluster
09:16 imilne left #gluster
09:20 DV joined #gluster
09:25 imilne joined #gluster
09:37 [Enrico] joined #gluster
09:48 6JTACGFQG joined #gluster
09:48 17SAD1FMG joined #gluster
09:53 p0rtal joined #gluster
10:01 haomaiwang joined #gluster
10:16 aravindavk joined #gluster
10:18 kovshenin joined #gluster
10:25 imilne left #gluster
10:25 imilne joined #gluster
10:31 RedW joined #gluster
10:33 jmarley joined #gluster
10:33 jmarley joined #gluster
10:35 kblin joined #gluster
10:35 kblin hi folks
10:43 kblin geo-replication in gluster is only ever available in a master/slave setup, right?
10:43 ndevos kblin: that is correct
10:43 kblin ah, pity
10:44 firemanxbr joined #gluster
10:45 ndevos kblin: some users set up two volumes so that each site is a master for one of them, like: site-A/volume-A -> site-B/volume-A and site-B/volume-B -> site-A/volume-B
10:47 kblin ndevos: yeah, but the software I want to deploy at the moment can't deal with that added complexity yet
10:47 kblin it's a number-crunching setup with a work queue, so I don't know beforehand which side will next have a node available
10:47 ndevos kblin: right, it really depends on the use-case if you can set it up like that
10:49 kblin I can appreciate that active/active geo-replication probably is really, really tricky to pull off well
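[A rough sketch of the crossed master/slave arrangement ndevos describes, with hypothetical host and volume names; exact session setup (pem distribution etc.) varies by version:
    # on site-A, replicate volume-A to site-B
    gluster volume geo-replication volume-A site-B::volume-A create push-pem
    gluster volume geo-replication volume-A site-B::volume-A start
    # on site-B, replicate volume-B back to site-A
    gluster volume geo-replication volume-B site-A::volume-B create push-pem
    gluster volume geo-replication volume-B site-A::volume-B start
Each volume is writable only on its master side, so this only helps when the workload can be partitioned per site - which is exactly the complexity kblin's work-queue setup can't absorb.]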
11:00 kovshenin joined #gluster
11:01 haomaiwang joined #gluster
11:17 gildub joined #gluster
11:21 haomaiwang joined #gluster
11:31 EinstCrazy joined #gluster
11:35 zhangjn joined #gluster
11:36 jmarley joined #gluster
11:39 marcoc joined #gluster
11:51 lpabon joined #gluster
11:53 RameshN joined #gluster
11:59 owlbot joined #gluster
12:02 zhangjn joined #gluster
12:13 owlbot joined #gluster
12:21 ira joined #gluster
12:22 jwaibel joined #gluster
12:23 yoavz_ joined #gluster
12:24 jwd_ joined #gluster
12:24 yoavz_ Hi, I'm having a weird issue. I have 4 volumes, 2 servers, replica 2: one replica of each of the 4 volumes on each server. I have a problem with one volume: everything I delete from it comes back from the dead, as if it were a split-brain situation. Any ideas on how to solve this? I want to delete everything on this volume.
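[A minimal sketch of the checks usually worth running in a case like this, with a hypothetical volume name:
    gluster volume heal vol1 info
    gluster volume heal vol1 info split-brain
    gluster volume status vol1
If one replica was down or out of quorum while the deletes happened, self-heal can copy the files back from the surviving brick, which looks exactly like the "comes back from the dead" behaviour described.]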
12:31 amye joined #gluster
12:32 curratore joined #gluster
13:02 nis joined #gluster
13:03 nis hi, I am trying to verify that write-behind works on the client side, but as far as I can see it doesn't
13:04 nis assuming my client writes data in 4K blocks, how can I write to a buffer of, let's say, 1M?
13:05 nis according to the gluster documentation this is done via the write-behind translator's ON option and the cache-size option, but testing this with dd doesn't show any performance improvement
13:05 nis can anyone assist?
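[A minimal sketch of how the write-behind knobs are usually set, with a hypothetical volume name and mount point. The aggregation buffer is controlled by performance.write-behind-window-size (performance.cache-size mostly affects the io-cache/read-ahead translators), and conv=fdatasync makes dd include the flush time in the measurement:
    gluster volume set vol1 performance.write-behind on
    gluster volume set vol1 performance.write-behind-window-size 1MB
    dd if=/dev/zero of=/mnt/vol1/testfile bs=4k count=25600 conv=fdatasync ]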
13:11 cabillman joined #gluster
13:18 kanagaraj joined #gluster
13:19 sage joined #gluster
13:24 anrao joined #gluster
13:26 DV joined #gluster
13:29 owlbot joined #gluster
13:31 tomatto joined #gluster
13:32 amye joined #gluster
13:36 marcoc Hi, we have a cluster with 5 storage nodes. It's the second time that issuing a single command has resulted in glusterd dying on one node (not the same node each time)
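[When glusterd dies like this, the reason usually ends up in its log; a sketch of what to collect before filing a bug, using the usual default paths (these may differ per distribution):
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    ls /var/crash/          # look for a core dump if one was written
    gluster --version ]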
13:37 jmarley joined #gluster
13:38 ahino joined #gluster
13:38 harish joined #gluster
13:41 haomaiwa_ joined #gluster
13:47 bennyturns joined #gluster
13:47 Mr_Psmith joined #gluster
13:52 hagarth joined #gluster
13:57 plarsen joined #gluster
14:01 kovshenin joined #gluster
14:01 haomaiwa_ joined #gluster
14:02 Norky_ joined #gluster
14:06 plarsen joined #gluster
14:10 shyam joined #gluster
14:10 dgandhi joined #gluster
14:15 hamiller joined #gluster
14:15 bennyturns joined #gluster
14:15 lpabon joined #gluster
14:15 crashmag joined #gluster
14:15 jtux joined #gluster
14:15 mswart joined #gluster
14:15 uebera|| joined #gluster
14:20 klaxa|work joined #gluster
14:28 haomaiwa_ joined #gluster
14:35 plarsen joined #gluster
14:49 cjellick joined #gluster
15:00 plarsen joined #gluster
15:01 haomaiwa_ joined #gluster
15:15 David_Varghese joined #gluster
15:27 bowhunter joined #gluster
15:38 cjellick joined #gluster
15:39 Gill joined #gluster
15:44 amye joined #gluster
15:45 ayma joined #gluster
15:47 overclk joined #gluster
15:47 amye joined #gluster
15:49 Gill_ joined #gluster
15:49 DV__ joined #gluster
15:50 overclk joined #gluster
15:50 RameshN joined #gluster
15:53 klaxa|work joined #gluster
15:53 klaxa|work left #gluster
16:00 kdhananjay joined #gluster
16:01 haomaiwa_ joined #gluster
16:04 skylar joined #gluster
16:12 klaas_ joined #gluster
16:13 whereismyjetpac1 joined #gluster
16:13 F2Knight_ joined #gluster
16:15 curratore_ joined #gluster
16:16 ChrisHolcombe joined #gluster
16:16 dan__ joined #gluster
16:17 gothos_ joined #gluster
16:18 cliluw joined #gluster
16:19 toddejohnson joined #gluster
16:28 timotheus1 joined #gluster
16:34 overclk joined #gluster
16:52 p0rtal joined #gluster
16:52 p0rtal joined #gluster
16:56 p0rtal joined #gluster
17:03 p0rtal joined #gluster
17:12 calavera joined #gluster
17:33 RameshN_ joined #gluster
17:34 shaunm joined #gluster
17:48 p0rtal joined #gluster
17:52 jmarley joined #gluster
17:54 armyriad joined #gluster
17:56 amye joined #gluster
17:57 amye left #gluster
18:08 calavera joined #gluster
18:19 Rapture joined #gluster
18:21 nage joined #gluster
18:25 chirino joined #gluster
18:43 Mr_Psmith joined #gluster
18:43 skylar joined #gluster
18:56 ahino joined #gluster
19:01 skylar1 joined #gluster
19:07 bluenemo joined #gluster
19:12 Pupeno joined #gluster
19:43 vincent__ joined #gluster
19:45 Vince joined #gluster
19:45 nis joined #gluster
19:45 p0rtal joined #gluster
19:46 Guest96915 Hi there, got a question about the geo-replication in gluster
19:47 Guest96915 OK, so I generated a 2G file on my master node, and on the remote site I noticed that the file gets replicated several times. Does anyone have an idea if that's normal behavior?
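[A sketch of the usual first checks for odd geo-replication behaviour, with hypothetical master/slave names; overlapping sessions or a changelog replay can make a large file appear to be transferred more than once:
    gluster volume geo-replication mastervol slavehost::slavevol status detail
    gluster volume geo-replication mastervol slavehost::slavevol config ]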
19:49 dblack joined #gluster
20:02 jmarley joined #gluster
20:08 mhulsman joined #gluster
20:14 DV__ joined #gluster
20:40 chirino joined #gluster
20:42 ghenry joined #gluster
20:42 ghenry joined #gluster
20:58 skylar1 joined #gluster
21:06 mhulsman joined #gluster
21:11 skylar1 joined #gluster
21:13 lpabon joined #gluster
21:30 papamoose joined #gluster
21:35 calavera joined #gluster
21:46 scooby2 joined #gluster
21:48 scooby2 Anyone know what would cause: 0-data-client-1: received RPC status error [Transport endpoint is not connected]
21:48 mlhamburg joined #gluster
21:48 scooby2 I've been banging my head on this all day
21:49 scooby2 gluster 3.7.6
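[That error generally just means the client lost its TCP connection to one brick; a sketch of the usual things to check (the volume name "data" is inferred from the "data-client-1" translator name in the message):
    gluster volume status data     # confirms each brick process is online and shows its port
    gluster peer status
    # from the client, confirm the brick port (49152+ by default) is reachable from the network
Then check the brick log under /var/log/glusterfs/bricks/ on the affected server for the matching disconnect.]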
21:51 skylar1 joined #gluster
22:00 amye joined #gluster
22:07 DV__ joined #gluster
22:42 skylar1 joined #gluster
22:55 dblack joined #gluster
23:02 harish joined #gluster
23:05 R0ok_ joined #gluster
23:08 ajhstn joined #gluster
23:09 ajhstn Morning, I am looking for the gluster SNMP MIBs. Are there any?
23:24 gildub joined #gluster
23:27 calavera joined #gluster
23:30 plarsen joined #gluster
23:31 ajhstn Morning, I am looking for the gluster SNMP MIBs. Are there any?
23:35 scooby2 ask and wait
23:35 scooby2 could take 6-12 hours
23:35 ajhstn more people have joined the channel since I asked, so will they see any history?
23:39 scooby2 no but the people that usually answer are idle
23:40 calavera joined #gluster
23:51 Alayde joined #gluster
23:56 Alayde Is anyone here available to help me troubleshoot some gluster issues I'm seeing? I'm getting some pretty poor performance versus our current NetApp NFS solution. Like 1/6 the speed :(
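[A sketch of the first-pass profiling usually asked for in performance threads like this, with a hypothetical volume name; small-file and metadata-heavy workloads over FUSE are a common reason gluster looks several times slower than an NFS filer:
    gluster volume profile vol1 start
    # run the workload, then:
    gluster volume profile vol1 info
    gluster volume profile vol1 stop ]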
