
IRC log for #gluster, 2015-08-25


All times shown according to UTC.

Time Nick Message
00:17 jmarley joined #gluster
00:39 overclk joined #gluster
00:47 leucos joined #gluster
00:50 frankS2 joined #gluster
00:53 overclk joined #gluster
00:58 jdossey joined #gluster
00:58 Zhang joined #gluster
00:59 mjrosenb why does gluster want to use a directory inside a mountpoint, rather than just the mountpoint?
01:00 Mr_Psmith joined #gluster
01:12 7GHAA1BJW joined #gluster
01:16 overclk_ joined #gluster
01:23 overclk joined #gluster
01:33 Lee1092 joined #gluster
01:35 overclk joined #gluster
01:43 badone_ joined #gluster
01:48 overclk joined #gluster
01:53 mjrosenb fantastic, it looks like gluster is crashing.
01:57 spcmastertim joined #gluster
01:57 volga629 is that true? libvirt: Storage Driver error : this function is not supported by the connection driver: storage pool does not support volume creation
02:00 nangthang joined #gluster
02:05 haomaiwa_ joined #gluster
02:11 julim joined #gluster
02:13 dlambrig joined #gluster
02:15 dlambrig joined #gluster
02:18 harish joined #gluster
02:21 scubacuda_ joined #gluster
02:22 dlambrig joined #gluster
02:27 volga629 why would I see an image listed like this
02:27 volga629 ???????????  ? ?    ?              ?            ? canlipa01.qcow2
02:27 dlambrig joined #gluster
02:35 bharata-rao joined #gluster
02:41 dlambrig joined #gluster
02:47 scubacuda_ joined #gluster
02:48 overclk joined #gluster
02:56 jmarley joined #gluster
03:00 gem joined #gluster
03:02 haomaiwa_ joined #gluster
03:18 maveric_amitc_ joined #gluster
03:20 vmallika joined #gluster
03:26 ppai joined #gluster
03:27 sakshi joined #gluster
03:29 [7] joined #gluster
03:37 kotreshhr joined #gluster
03:40 RameshN joined #gluster
03:46 dlambrig joined #gluster
03:49 shubhendu joined #gluster
03:50 kotreshhr left #gluster
03:57 kanagaraj joined #gluster
04:01 64MADOISQ joined #gluster
04:02 ppai joined #gluster
04:05 kkeithley1 joined #gluster
04:10 itisravi joined #gluster
04:11 dlambrig joined #gluster
04:13 itisravi joined #gluster
04:14 atinm joined #gluster
04:14 ramteid joined #gluster
04:16 poornimag joined #gluster
04:27 overclk joined #gluster
04:35 anil joined #gluster
04:36 yazhini joined #gluster
04:37 skoduri joined #gluster
04:38 gem joined #gluster
04:38 hgowtham joined #gluster
04:39 scubacuda_ joined #gluster
04:40 ndarshan joined #gluster
04:44 scubacuda_ joined #gluster
04:46 atalur joined #gluster
04:48 Zhang joined #gluster
04:51 Zhang joined #gluster
04:52 raghug joined #gluster
05:02 haomaiwa_ joined #gluster
05:04 vimal joined #gluster
05:06 plarsen joined #gluster
05:08 mjrosenb joined #gluster
05:08 yazhini joined #gluster
05:08 mjrosenb I'm getting a strange error when I try to disable nfs.
05:09 mjrosenb volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again
05:09 mjrosenb also, once upon a time, there was a setting that allowed a brick to have multiple mount points in it
05:09 mjrosenb is this still something that can be done?
05:11 kotreshhr joined #gluster
05:12 ndevos mjrosenb: multiple mountpoints in a brick is not possible (AFAIK) since the .glusterfs directory contains hardlinks (.glusterfs/xx/yy/&lt;gfid&gt; <-> /dir/filename)
05:13 ndevos mjrosenb: and the error you get about the "connected clients cannot support the feature" means that a glusterfs client process with a too-low op-version is connected
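For reference, the hardlink ndevos mentions lives under `.glusterfs/<aa>/<bb>/<gfid>` on the brick, where `aa` and `bb` are the first two byte-pairs of the file's gfid. A minimal sketch of the mapping (the gfid and brick paths below are made-up examples, not from this channel):

```shell
# Compute the .glusterfs hardlink path for a given gfid (example gfid, not real):
gfid="b834114f-4b9f-4a9d-9d8a-2c1f5e0a3b7c"
link=".glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid}"
echo "$link"   # -> .glusterfs/b8/34/b834114f-4b9f-4a9d-9d8a-2c1f5e0a3b7c

# On a live brick you could verify the mapping (paths are hypothetical):
#   getfattr -n trusted.gfid -e hex /bricks/brick1/dir/filename
#   stat /bricks/brick1/.glusterfs/b8/34/$gfid   # same inode as the file
```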
05:13 jdossey joined #gluster
05:17 jmarley joined #gluster
05:18 meghanam joined #gluster
05:22 dusmant joined #gluster
05:23 TvL2386 joined #gluster
05:24 Bhaskarakiran joined #gluster
05:25 pppp joined #gluster
05:26 itisravi joined #gluster
05:30 hagarth joined #gluster
05:32 itisravi_ joined #gluster
05:38 itisravi joined #gluster
05:38 nishanth joined #gluster
05:41 overclk joined #gluster
05:41 mjrosenb how can I ask where clients are connected from?
05:42 mjrosenb so this totally worked before, I just didn't get the benefit that the hardlinks gave, but everything still worked
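One way to answer the "where are clients connected from" question is `gluster volume status <vol> clients`, which lists each brick's connected clients. The snippet below filters a sample of that output for the client host:port lines; the output shape is an approximation of the 3.x CLI, and the volume name and addresses are invented:

```shell
# Real command (volume name is a placeholder):
#   gluster volume status myvol clients
# Approximate per-brick output, and a filter for the client address lines:
sample='Brick : server1:/bricks/brick1
Clients connected : 2
Hostname                                    BytesRead    BytesWritten
--------                                    ---------    ------------
192.168.1.10:1021                             1495104         2049408
192.168.1.11:1018                              961068         1298916'
clients=$(echo "$sample" | awk '/^[0-9]+\./ {print $1}')
echo "$clients"
```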
05:45 kdhananjay joined #gluster
05:46 ashiq joined #gluster
05:46 jiffin joined #gluster
05:47 Manikandan joined #gluster
05:48 deepakcs joined #gluster
05:50 rjoseph joined #gluster
05:50 hagarth joined #gluster
05:53 hchiramm_home joined #gluster
06:01 haomaiwa_ joined #gluster
06:02 ramky joined #gluster
06:02 skoduri joined #gluster
06:04 jwd joined #gluster
06:09 itisravi_ joined #gluster
06:10 baojg joined #gluster
06:12 glusterbot joined #gluster
06:13 VeggieMeat joined #gluster
06:14 ELCALOR joined #gluster
06:15 leucos joined #gluster
06:16 frankS2 joined #gluster
06:18 shubhendu joined #gluster
06:18 jtux joined #gluster
06:19 baojg joined #gluster
06:19 dusmant joined #gluster
06:21 ashiq- joined #gluster
06:21 nishanth joined #gluster
06:25 scubacuda joined #gluster
06:25 dlambrig joined #gluster
06:28 nbalacha joined #gluster
06:30 arcolife joined #gluster
06:34 dlambrig joined #gluster
06:36 hchiramm_home joined #gluster
06:40 hagarth joined #gluster
06:41 vimal joined #gluster
06:42 scubacuda joined #gluster
06:43 maveric_amitc_ joined #gluster
06:43 baojg joined #gluster
06:44 atalur joined #gluster
06:48 skoduri joined #gluster
06:49 sakshi joined #gluster
06:52 kotreshhr joined #gluster
06:53 Bhaskarakiran joined #gluster
06:53 skoduri_ joined #gluster
06:56 raghu joined #gluster
06:56 rafi joined #gluster
07:01 spalai joined #gluster
07:02 haomaiwa_ joined #gluster
07:03 deniszh joined #gluster
07:07 RameshN joined #gluster
07:09 nangthang joined #gluster
07:13 sakshi joined #gluster
07:16 Bhaskarakiran joined #gluster
07:28 kotreshhr joined #gluster
07:30 Guest39659 joined #gluster
07:46 primusinterpares joined #gluster
07:47 dusmant joined #gluster
07:48 shubhendu joined #gluster
08:01 haomaiwa_ joined #gluster
08:08 primusinterpares joined #gluster
08:13 Norky joined #gluster
08:14 Zhang joined #gluster
08:15 fsimonce joined #gluster
08:17 yazhini joined #gluster
08:19 RameshN joined #gluster
08:23 kshlm joined #gluster
08:25 LebedevRI joined #gluster
08:26 s19n joined #gluster
08:30 autoditac_ joined #gluster
08:35 muneerse2 joined #gluster
08:42 yazhini joined #gluster
08:42 haomaiwa_ joined #gluster
08:43 jcastill1 joined #gluster
08:48 jcastillo joined #gluster
08:48 Trefex joined #gluster
08:53 jcastillo joined #gluster
08:56 dlambrig joined #gluster
08:57 ctria joined #gluster
09:10 haomaiwa_ joined #gluster
09:22 vmallika joined #gluster
09:23 vmallika joined #gluster
09:28 patryck joined #gluster
09:28 patryck re
09:33 _Bryan_ joined #gluster
09:33 jcastill1 joined #gluster
09:37 dlambrig joined #gluster
09:37 ramky joined #gluster
09:39 jcastillo joined #gluster
09:43 Gugge joined #gluster
09:47 kdhananjay joined #gluster
09:50 meghanam joined #gluster
09:51 harish joined #gluster
09:55 elico joined #gluster
10:03 eMBee joined #gluster
10:04 rafi1 joined #gluster
10:05 Zhang joined #gluster
10:05 Manikandan joined #gluster
10:05 * eMBee read about a recommendation that local system and gluster disks should be separate because local disk operations could slow gluster down. but with a set of 5x6TB disks, that is not possible. is that really a concern?
10:10 haomaiwang joined #gluster
10:10 ramky joined #gluster
10:13 Zhang joined #gluster
10:13 RameshN joined #gluster
10:17 poornimag joined #gluster
10:24 kdhananjay joined #gluster
10:26 Zhang joined #gluster
10:34 ramteid joined #gluster
10:36 kotreshhr joined #gluster
10:36 anil joined #gluster
10:38 jcastill1 joined #gluster
10:38 Zhang joined #gluster
10:40 ramky joined #gluster
10:42 Zhang joined #gluster
10:43 jcastillo joined #gluster
10:47 meghanam joined #gluster
10:48 overclk joined #gluster
10:48 kkeithley1 joined #gluster
10:49 hagarth joined #gluster
11:01 ira joined #gluster
11:03 Zhang joined #gluster
11:04 ramky joined #gluster
11:10 Zhang joined #gluster
11:10 haomaiwang joined #gluster
11:12 nishanth joined #gluster
11:15 firemanxbr joined #gluster
11:16 arcolife joined #gluster
11:18 jrm16020 joined #gluster
11:23 arcolife joined #gluster
11:26 arcolife joined #gluster
11:31 dlambrig joined #gluster
11:38 kkeithley1 joined #gluster
11:39 Zhang joined #gluster
11:45 kotreshhr joined #gluster
11:47 autoditac_ joined #gluster
11:51 unclemarc joined #gluster
11:54 rafi joined #gluster
11:55 hchiramm_home joined #gluster
11:56 dlambrig joined #gluster
11:59 jcastill1 joined #gluster
12:02 kanagaraj joined #gluster
12:04 rjoseph joined #gluster
12:04 jcastillo joined #gluster
12:09 mazl joined #gluster
12:10 haomaiwa_ joined #gluster
12:14 pdrakeweb joined #gluster
12:17 jtux joined #gluster
12:19 Zhang joined #gluster
12:21 overclk joined #gluster
12:29 kotreshhr left #gluster
12:34 Jeroenpc joined #gluster
12:43 hchiramm_home joined #gluster
12:43 stickyboy joined #gluster
12:45 shyam joined #gluster
12:53 kshlm joined #gluster
12:54 ashiq- joined #gluster
12:58 mazl joined #gluster
12:59 mazl hello
12:59 glusterbot mazl: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:00 mazl what is the rsync interval between bricks?
13:00 ashiq joined #gluster
13:01 shaunm joined #gluster
13:02 hagarth mazl: synchronous replication or geo-replication ?
13:02 kkeithley1 semiosis: ping. what do you think about moving the gluster-debian repo to ~gluster. I.e. changing the owner?
13:03 mazl @hagarth synchronous one
13:03 dlambrig joined #gluster
13:04 atrius joined #gluster
13:05 kkeithley1 there is no rsync interval. The client writes to both/all bricks.
13:05 hagarth mazl: with synchronous replication, updates get synchronously written to all bricks in a replica set.
13:06 mazl OK, I'm streaming data to files in glusterfs and I have very poor performance
13:06 mazl the first week (when the volume was empty) I didn't have any problems
13:07 mazl but now (with 500 GB) I have very poor performance
13:07 mazl does the number of files / total size impact performance?
13:08 dusmant joined #gluster
13:10 haomaiwa_ joined #gluster
13:17 kshlm joined #gluster
13:17 hagarth mazl: what kind of performance is bad and how bad is it?
13:18 Xtreme gluster is bad..
13:18 Xtreme it's slowing my servers down
13:18 Xtreme a lot
13:19 Zhang joined #gluster
13:22 DV_ joined #gluster
13:24 RameshN joined #gluster
13:25 jonb joined #gluster
13:29 mpietersen joined #gluster
13:30 qubozik joined #gluster
13:31 Zhang joined #gluster
13:32 jonb Hello, I've got an 8-node replicated volume and I'm trying to recover a server that went down, but I'm having trouble. When the server comes back online, Gluster connects to it just fine but the volume is unacceptably slow. Load on the replicated pair of servers is 14 on a 24-core machine. I tried disabling the self-heal daemon after finding that suggestion in a Gluster forum post. Anyone have any suggestions? Thanks in advance.
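For reference, a hedged sketch of the AFR self-heal options that are sometimes toggled while a recovered node catches up, as jonb describes. The volume name `myvol` is a placeholder, and the `echo` keeps the loop a dry run:

```shell
# Dry run: print (rather than execute) the option changes. Drop `echo` to apply.
vol=myvol   # hypothetical volume name
for opt in cluster.self-heal-daemon cluster.data-self-heal \
           cluster.metadata-self-heal cluster.entry-self-heal; do
    echo gluster volume set "$vol" "$opt" off
done
# Re-enable after `gluster volume heal myvol info` shows no pending entries:
#   gluster volume set myvol cluster.self-heal-daemon on
```

Leaving these off permanently means files are only healed when a client touches them, so they are usually switched back on once load subsides.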
13:34 mazl hagarth: I'm writing real-time messages to text files. I receive 6000+ msg/s but I can't write more than 2000 msg/s
13:35 RameshN joined #gluster
13:35 bennyturns joined #gluster
13:37 Bonaparte left #gluster
13:37 dgandhi joined #gluster
13:45 Zhang joined #gluster
13:48 kanagaraj joined #gluster
13:50 jcastill1 joined #gluster
13:50 harold joined #gluster
13:50 neofob joined #gluster
13:55 jcastillo joined #gluster
14:00 jcastillo joined #gluster
14:01 arcolife joined #gluster
14:10 harish joined #gluster
14:10 haomaiwa_ joined #gluster
14:12 vmallika joined #gluster
14:19 Twistedgrim joined #gluster
14:21 arcolife joined #gluster
14:24 paescuj joined #gluster
14:26 RameshN joined #gluster
14:34 paescuj hi there. can somebody help me with the following question: I have a two-node glusterfs setup with one replicated volume used with qemu (libgfapi). now, I have noticed that over a long (monitored) period there is almost no read IO (avg 76 kbps) on the first node, while there is a lot on the second node (avg 17 mbps). why is that? could there be a problem? thank you very much!
14:36 spalai left #gluster
14:44 papamoose joined #gluster
14:50 shubhendu joined #gluster
14:50 kshlm joined #gluster
14:58 ipmango joined #gluster
15:06 plarsen joined #gluster
15:08 Slashman joined #gluster
15:09 meghanam joined #gluster
15:10 haomaiwa_ joined #gluster
15:12 cyberswat joined #gluster
15:20 bennyturns joined #gluster
15:25 Zhang joined #gluster
15:29 vimal joined #gluster
15:29 skoduri joined #gluster
15:31 jcastill1 joined #gluster
15:36 Zhang joined #gluster
15:37 dlambrig joined #gluster
15:37 jcastillo joined #gluster
15:56 Vitaliy|2 joined #gluster
16:01 Vitaliy|2 Hi, need help. Have a 2x2 setup with 4 bricks and am trying to remove two bricks. The operation fails. Looking at the logs I see the rebalance crashes. This is openSUSE 13.2, Gluster 3.7.3
16:01 shyam1 joined #gluster
16:06 techsenshi joined #gluster
16:10 haomaiwang joined #gluster
16:14 kotreshhr joined #gluster
16:14 cholcombe joined #gluster
16:17 Vitaliy|2 No ideas anyone?
16:22 Vitaliy|2 Anyone alive?
16:25 plarsen joined #gluster
16:46 DV joined #gluster
16:48 mckaymatt joined #gluster
17:09 Vitaliy|2 Hi, need help. Have a 2x2 setup with 4 bricks and am trying to remove two bricks. The operation fails. Looking at the logs I see the rebalance crashes. This is openSUSE 13.2, Gluster 3.7.3
17:10 haomaiwa_ joined #gluster
17:10 jdossey joined #gluster
17:16 elico joined #gluster
17:17 spcmastertim joined #gluster
17:19 kotreshhr joined #gluster
17:19 shyam joined #gluster
17:20 mckaymatt joined #gluster
17:39 kotreshhr left #gluster
17:44 shaunm joined #gluster
17:49 elico joined #gluster
17:54 RameshN_ joined #gluster
17:54 mpietersen joined #gluster
17:56 mpietersen joined #gluster
18:02 Lee- joined #gluster
18:02 kshlm joined #gluster
18:05 jcastill1 joined #gluster
18:10 jcastillo joined #gluster
18:10 haomaiwa_ joined #gluster
18:16 elitecoder JoeJulian: Any thoughts on this? http://pastebin.com/Z435s9Uk
18:16 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:24 siel joined #gluster
18:24 hagarth joined #gluster
18:26 jwaibel joined #gluster
18:33 elitecoder sigh
18:40 elitecoder left #gluster
18:45 mpietersen joined #gluster
18:45 haomaiwa_ joined #gluster
18:53 mckaymatt joined #gluster
19:03 firemanxbr joined #gluster
19:04 kanarip joined #gluster
19:10 haomaiwang joined #gluster
19:11 mpietersen joined #gluster
19:12 mpietersen joined #gluster
19:30 autoditac_ joined #gluster
19:32 _maserati joined #gluster
19:35 plarsen joined #gluster
19:43 semiosis kkeithley: sounds good to me, i would just advise against merging it into the glusterfs source (which had been done in the past)
19:43 semiosis kkeithley: let me know how to proceed
19:43 spcmastertim joined #gluster
19:44 semiosis well, to clarify, in the past there were packaging files in the source tree for various distros, but that was too hard to work with.
19:44 semiosis so debian ignored them and did their own, as did I
19:46 haomai___ joined #gluster
19:46 rideh joined #gluster
19:49 Vitaliy|2 any help with rebalance operation crashing during brick removal?
20:04 autoditac_ joined #gluster
20:05 autoditac_ joined #gluster
20:06 julim joined #gluster
20:10 haomaiwa_ joined #gluster
20:10 arcolife joined #gluster
20:22 CyrilPeponnet Hey guys still struggling with volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again
20:22 CyrilPeponnet the point is I have 600 clients connected... which one is the culprit?
20:22 CyrilPeponnet (running 3.6.4)
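One hedged way to hunt for the low-op-version client among CyrilPeponnet's 600: brick logs record a connect line for each client that typically includes its version string. The log lines below are fabricated approximations of the 3.6-era format, just to show the filtering idea:

```shell
# Fabricated brick-log lines (format approximated); filter out clients already
# on the current major version to spot stragglers:
old=$(printf '%s\n' \
  '0-myvol-server: accepted client from host-a-2468 (version: 3.6.4)' \
  '0-myvol-server: accepted client from host-b-1357 (version: 3.4.2)' \
  | grep 'accepted client' | grep -v 'version: 3.6')
echo "$old"
# On a real server, something along these lines (log path varies by distro):
#   grep -h 'accepted client' /var/log/glusterfs/bricks/*.log | grep -v 'version: 3.6'
```

A brick statedump (`gluster volume statedump <vol>`) also lists connected clients, written under /var/run/gluster/ by default.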
20:37 TheCthulhu1 joined #gluster
20:41 autoditac_ joined #gluster
20:49 autoditac_ joined #gluster
21:10 haomaiwa_ joined #gluster
21:12 badone_ joined #gluster
22:10 haomaiwang joined #gluster
22:12 ctria joined #gluster
22:28 B21956 joined #gluster
22:35 doekia joined #gluster
22:39 volga629 joined #gluster
22:55 Vitaliy|2 Still no one has a clue about crashing rebalance?
23:10 7JTAAJT22 joined #gluster
23:34 csim joined #gluster
23:34 kanarip joined #gluster
23:47 xiu joined #gluster
23:48 gildub joined #gluster
