IRC log for #gluster, 2015-05-04


All times shown according to UTC.

Time Nick Message
00:11 prg3 joined #gluster
00:27 nangthang joined #gluster
00:48 prg3 joined #gluster
00:49 rcschool joined #gluster
01:07 bjornar joined #gluster
01:09 gildub joined #gluster
01:17 rcschool joined #gluster
01:29 nsoffer joined #gluster
01:45 nangthang joined #gluster
02:07 nsoffer joined #gluster
02:30 sdb_ joined #gluster
02:31 nsoffer joined #gluster
02:34 here_and_there joined #gluster
02:36 sdb_ I'm looking to set up a STaaS service using gluster in a multi-tenancy environment, where clients will connect directly to the storage over 1Gb/s ethernet using either NFS or CIFS. Has anyone built a similar setup and/or can you offer any constructive advice?
02:44 bharata-rao joined #gluster
02:47 badone__ joined #gluster
03:01 rafi joined #gluster
03:06 badone__ joined #gluster
03:09 jiku joined #gluster
03:28 raghug joined #gluster
03:30 kumar joined #gluster
03:34 nbalacha joined #gluster
03:37 itisravi joined #gluster
03:38 haomaiwa_ joined #gluster
03:38 Anjana joined #gluster
03:41 fattaneh joined #gluster
03:51 harish joined #gluster
03:54 overclk joined #gluster
03:55 fattaneh1 joined #gluster
03:57 [7] joined #gluster
03:59 shubhendu_ joined #gluster
04:01 schandra joined #gluster
04:09 RameshN joined #gluster
04:13 kdhananjay joined #gluster
04:17 Bhaskarakiran joined #gluster
04:22 Bhaskarakiran joined #gluster
04:25 ndarshan joined #gluster
04:36 anoopcs joined #gluster
04:37 hagarth joined #gluster
04:46 sakshi joined #gluster
04:46 kshlm joined #gluster
04:53 soumya joined #gluster
04:55 karnan joined #gluster
04:56 gem joined #gluster
04:58 atinmu joined #gluster
04:58 vimal joined #gluster
04:59 Manikandan_ joined #gluster
04:59 ramteid joined #gluster
05:07 spandit joined #gluster
05:09 Apeksha joined #gluster
05:21 smohan joined #gluster
05:23 glusterbot News from newglusterbugs: [Bug 1218033] BitRot :- If scrubber finds bad file then it should log as a 'ALERT' in log not 'Warning' <https://bugzilla.redhat.com/show_bug.cgi?id=1218033>
05:23 glusterbot News from newglusterbugs: [Bug 1218036] BitRot :- volume info should not show 'features.scrub: resume' if scrub process is resumed <https://bugzilla.redhat.com/show_bug.cgi?id=1218036>
05:23 glusterbot News from newglusterbugs: [Bug 1218032] Effect of Trash translator over CTR translator <https://bugzilla.redhat.com/show_bug.cgi?id=1218032>
05:25 lalatenduM joined #gluster
05:25 anil joined #gluster
05:34 gem joined #gluster
05:43 saurabh_ joined #gluster
05:45 soumya joined #gluster
05:45 rcschool joined #gluster
05:52 kaushal_ joined #gluster
05:55 atalur joined #gluster
05:59 maveric_amitc_ joined #gluster
06:04 atinmu joined #gluster
06:13 soumya joined #gluster
06:14 deepakcs joined #gluster
06:16 Anjana joined #gluster
06:19 jtux joined #gluster
06:24 glusterbot News from newglusterbugs: [Bug 1217576] Gluster volume locks the whole cluster <https://bugzilla.redhat.com/show_bug.cgi?id=1217576>
06:32 spalai joined #gluster
06:32 spalai left #gluster
06:34 sas_ joined #gluster
06:36 myakove joined #gluster
06:38 spalai joined #gluster
06:42 atinmu joined #gluster
06:47 Philambdo joined #gluster
06:49 gem joined #gluster
06:51 anrao joined #gluster
06:53 nsoffer joined #gluster
06:54 kovshenin joined #gluster
06:58 kanagaraj joined #gluster
07:00 raghu joined #gluster
07:01 shubhendu_ joined #gluster
07:05 nishanth joined #gluster
07:08 rafi joined #gluster
07:09 gvandewe1er joined #gluster
07:09 gvandewe1er is it possible to provide different gluster options on a per-client basis? e.g. the cache size
07:13 poornimag joined #gluster
07:13 rafi joined #gluster
07:13 gvandewe1er I'm trying to get some re-used files cached, but they are larger than the RAM on one of the clients, so I'd like to reduce the cache size for that client
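(One possible approach, as a sketch: the glusterfs FUSE client accepts --xlator-option to override a translator option for that one mount, so the io-cache size could be lowered only on the low-memory client. The volume name "myvol", server "server1" and the "myvol-io-cache" translator name below are assumptions; check the client volfile for the real names.)

    # hypothetical per-client override of the client-side io-cache size
    glusterfs --volfile-server=server1 --volfile-id=myvol \
        --xlator-option=myvol-io-cache.cache-size=128MB /mnt/myvol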
07:15 meghanam joined #gluster
07:18 ppai joined #gluster
07:19 aravindavk joined #gluster
07:29 Peppard joined #gluster
07:36 fsimonce joined #gluster
07:38 maveric_amitc_ joined #gluster
07:40 DV joined #gluster
07:45 harish joined #gluster
07:50 Slashman joined #gluster
07:51 jvandewege joined #gluster
07:51 michatotol_ joined #gluster
07:51 soumya joined #gluster
07:51 liquidat joined #gluster
07:52 Pupeno joined #gluster
07:54 glusterbot News from newglusterbugs: [Bug 1218060] [SNAPSHOT]: Initializing snap_scheduler from all nodes at the same time should give proper error message <https://bugzilla.redhat.com/show_bug.cgi?id=1218060>
07:54 liquidat joined #gluster
07:54 gothos Hey, have there been any changes from 3.6.3 beta 2 to 3.6.3 final?
07:55 Anjana joined #gluster
08:00 hchiramm gothos, https://www.mail-archive.com/gluster-users@gluster.org/msg19963.html
08:00 hchiramm there should be some
08:05 kovshenin joined #gluster
08:06 ccha4 hello, glusterfs crashed on a client; is it normal that it created a core file in / and not in the log path?
08:07 ccha4 is it the glusterfs client or fuse that created the core file?
08:08 anoopcs ccha4: Try file <core-file-path> and see the output
08:09 kovshenin joined #gluster
08:10 ccha4 the core file was already deleted by someone
08:16 anoopcs AFAIK, a crashed glusterfs will dump the core in / and that's normal.
08:16 anoopcs ccha4:
08:17 gothos hchiramm: thanks, for some reason I just didn't see that mail oO
08:19 anoopcs ccha4: You can manually core-dump an actively running process by sending the SIGQUIT signal to it via kill.
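(A minimal illustration of the two suggestions above; the core path and PID selection are placeholders.)

    # identify which binary produced an existing core file
    file /core.12345
    # force a core dump from a running glusterfs process (note: SIGQUIT terminates it)
    kill -QUIT $(pidof glusterfs | head -n1)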
08:30 deniszh joined #gluster
08:34 shubhendu_ joined #gluster
08:36 kovsheni_ joined #gluster
08:37 malevolent joined #gluster
08:37 xavih joined #gluster
08:39 kovshenin joined #gluster
08:50 Pupeno joined #gluster
08:54 glusterbot News from newglusterbugs: [Bug 1218123] BitRot :- info about bitd and scrubber daemon is not shown in volume status <https://bugzilla.redhat.com/show_bug.cgi?id=1218123>
08:56 cp3glfs joined #gluster
08:59 ccha4 anoopcs: is it possible to set the core dump path?
09:00 anoopcs ccha4: You mean gluster command line option for volume?
09:00 Debloper joined #gluster
09:01 ccha4 yes
09:01 gothos I just upgraded to 3.6.3 and started a self heal and I'm getting a lot of the following: http://fpaste.org/218119/30730066/
09:02 gothos Any idea how to resolve that? I can't really find anything on the net :/
09:03 gothos When I try to do an 'ls' on one of the gfids in .glusterfs I get: ls: cannot access .glusterfs/c2/26/c2261d5d-af75-45d1-9d3e-281e189b95e8: Too many levels of symbolic links
09:06 fattaneh1 joined #gluster
09:07 cp3glfs joined #gluster
09:08 anoopcs ccha4: I don't think so.
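(Gluster itself has no such option, but on Linux the kernel-wide core_pattern sysctl decides where core files land; a sketch, assuming a /var/crash directory exists:)

    # route all core dumps, including glusterfs crashes, to /var/crash (%e = program name, %p = PID)
    sysctl -w kernel.core_pattern=/var/crash/core.%e.%p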
09:10 hchiramm gothos, np :)
09:10 hchiramm gothos++
09:10 glusterbot hchiramm: gothos's karma is now 2
09:12 here_and_there does someone know whether gluster 3.4.2 is meant to work with Debian squeeze, please?
09:13 hchiramm http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.2/Debian/apt/dists/
09:13 here_and_there I got the link but was wondering whether there is some kind of dependency on a specific libc version or something like that
09:13 hchiramm here_and_there, I see packages there, not tried them though
09:14 here_and_there otherwise 3.3 and 3.4 should be compatible, right?
09:14 here_and_there at least that's what I read, iirc
09:15 hchiramm here_and_there, about the compatibility part I am not sure. more or less, 3.4 is almost EOL.
09:16 hchiramm kkeithley, would be the right person to check on debian builds.
09:16 here_and_there hchiramm: ok thanks
09:17 shubhendu_ joined #gluster
09:19 s19n joined #gluster
09:20 gem joined #gluster
09:24 nsoffer joined #gluster
09:24 gothos Hmm, shouldn't files in .glusterfs usually be symlinks, or am I missing something here?
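(As far as the on-disk layout goes, gfid entries for regular files are hard links to the data files and only directories are represented as symlinks, so an ELOOP like the one above usually points at a broken or looping symlink chain. A quick check, reusing the gfid from the error:)

    # a healthy regular file shows a link count >= 2 and type 'regular file';
    # directory gfids show up as symlinks back into the brick tree
    stat -c '%h %F %n' .glusterfs/c2/26/c2261d5d-af75-45d1-9d3e-281e189b95e8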
09:26 LebedevRI joined #gluster
09:40 here_and_there trying to mount a remote volume and then I get "failed to get the port number for remote subvolume"
09:40 here_and_there is this the port mentioned with option transport.socket.remote-port in the glusterfs.vol ?
09:41 s19n Hi all, I am trying to expand a GlusterFS 3.4 volume by one server; it is distributed + replicated, 4x2 bricks on two servers
09:43 s19n basically I'd like to follow JoeJulian's scheme, but without using the (deprecated?) "replace-brick" command
09:44 s19n do you see any harm in temporarily increasing the replica count to 3, and then going back to 2 by selectively removing bricks in order to leave only those "on the right server"?
09:48 Anjana joined #gluster
09:49 fattaneh1 joined #gluster
09:49 ndarshan joined #gluster
09:49 soumya joined #gluster
10:19 kripper joined #gluster
10:22 kanagaraj joined #gluster
10:25 kripper s19n: shouldn't be a problem
10:25 glusterbot News from newglusterbugs: [Bug 1022759] subvols-per-directory floods client logs with "disk layout missing" messages <https://bugzilla.redhat.com/show_bug.cgi?id=1022759>
10:25 karnan joined #gluster
10:25 kripper s19n: removing replica bricks doesn't even require a rebalance
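(A rough sketch of that sequence, with placeholder volume and brick names; a 4x2 volume needs one extra brick per replica set to reach replica 3, and the replica reduction is normally done with 'force' since the data already exists on the remaining copies:)

    # grow each of the 4 replica sets to 3 copies using bricks on the new server
    gluster volume add-brick myvol replica 3 \
        newserver:/bricks/b1 newserver:/bricks/b2 newserver:/bricks/b3 newserver:/bricks/b4
    # once self-heal has populated the new bricks, drop back to replica 2,
    # naming exactly the bricks that should give up their copy
    gluster volume remove-brick myvol replica 2 \
        server1:/bricks/b1 server1:/bricks/b2 server2:/bricks/b3 server2:/bricks/b4 force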
10:25 kripper left #gluster
10:30 rafi1 joined #gluster
10:34 poornimag joined #gluster
10:36 raghug joined #gluster
10:38 hchiramm joined #gluster
10:45 aravindavk joined #gluster
10:46 kaushal_ joined #gluster
10:46 atinm joined #gluster
10:47 nbalacha joined #gluster
10:53 ira joined #gluster
10:55 glusterbot News from newglusterbugs: [Bug 1218164] [SNAPSHOT] : Correction required in output message after initilalising snap_scheduler <https://bugzilla.redhat.com/show_bug.cgi?id=1218164>
10:55 glusterbot News from newglusterbugs: [Bug 1218167] [GlusterFS 3.6.3]: Brick crashed after setting up SSL/TLS in I/O access path with error: "E [socket.c:2495:socket_poller] 0-tcp.gluster-native-volume-3G-1-server: error in polling loop" <https://bugzilla.redhat.com/show_bug.cgi?id=1218167>
10:55 kotreshhr joined #gluster
11:04 soumya joined #gluster
11:07 ndarshan joined #gluster
11:08 kovshenin joined #gluster
11:16 firemanxbr joined #gluster
11:17 schandra joined #gluster
11:27 glusterbot News from resolvedglusterbugs: [Bug 1095179] Gluster volume inaccessible on all bricks after a glusterfsd segfault on one brick <https://bugzilla.redhat.com/show_bug.cgi?id=1095179>
11:38 soumya joined #gluster
11:54 fattaneh1 joined #gluster
11:55 glusterbot News from newglusterbugs: [Bug 1218243] quota/marker: turn off inode quotas by default <https://bugzilla.redhat.com/show_bug.cgi?id=1218243>
12:03 itisravi joined #gluster
12:06 [Enrico] joined #gluster
12:15 Pupeno joined #gluster
12:16 poornimag joined #gluster
12:21 gem_ joined #gluster
12:24 raghu joined #gluster
12:27 shaunm_ joined #gluster
12:29 anoopcs joined #gluster
12:39 schandra joined #gluster
12:41 suliba joined #gluster
12:41 16WAAWFX8 joined #gluster
12:42 rafi joined #gluster
12:43 ppai joined #gluster
12:43 atinm joined #gluster
12:52 Pupeno joined #gluster
12:54 chirino joined #gluster
13:00 Pupeno joined #gluster
13:05 atalur joined #gluster
13:09 spalai left #gluster
13:10 rwheeler joined #gluster
13:21 nsoffer joined #gluster
13:22 hagarth joined #gluster
13:23 georgeh-LT2 joined #gluster
13:25 georgeh-LT2_ joined #gluster
13:25 glusterbot News from newglusterbugs: [Bug 1218273] [Tiering] : Attaching another node to the cluster which has a tiered volume times out <https://bugzilla.redhat.com/show_bug.cgi?id=1218273>
13:28 plarsen joined #gluster
13:30 dgandhi joined #gluster
13:32 17SACHOZK joined #gluster
13:36 jobewan joined #gluster
13:39 lalatenduM__ joined #gluster
13:40 kshlm joined #gluster
13:40 kotreshhr left #gluster
13:43 ppai joined #gluster
13:44 hamiller joined #gluster
13:48 julim joined #gluster
13:56 Anjana joined #gluster
13:59 Twistedgrim joined #gluster
14:02 Anjana1 joined #gluster
14:04 jobewan joined #gluster
14:10 lalatenduM__ joined #gluster
14:20 aravindavk joined #gluster
14:22 wushudoin joined #gluster
14:24 kdhananjay joined #gluster
14:32 bennyturns joined #gluster
14:37 papamoose joined #gluster
14:43 cholcombe joined #gluster
14:47 jbonjean joined #gluster
14:48 atinm joined #gluster
14:48 shaunm_ joined #gluster
14:51 jbonjean hi, I am getting "pthread_create failed: Cannot allocate memory" on my server, and an lsof shows thousands of FIFOs opened. Does anybody have an idea?
14:52 coredump joined #gluster
14:52 jbonjean the process that opens these FIFOs is glusterfs
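(pthread_create returning ENOMEM usually points at thread, process or address-space limits rather than exhausted RAM; a few hedged checks, with the PID selection as a placeholder:)

    PID=$(pidof glusterfs | head -n1)   # placeholder: pick the affected glusterfs process
    ls /proc/$PID/task | wc -l          # how many threads it already has
    cat /proc/$PID/limits               # 'Max processes' and address-space limits for it
    lsof -p $PID | grep -c FIFO         # confirm the FIFO count mentioned above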
14:56 glusterbot News from newglusterbugs: [Bug 1218304] Intermittent failure of basic/afr/data-self-heal.t <https://bugzilla.redhat.com/show_bug.cgi?id=1218304>
14:56 lalatenduM__ joined #gluster
15:00 Leildin joined #gluster
15:13 anrao joined #gluster
15:23 nbalacha joined #gluster
15:26 kovsheni_ joined #gluster
15:29 poornimag joined #gluster
15:40 plarsen joined #gluster
15:42 kovshenin joined #gluster
15:48 kovshenin joined #gluster
15:51 kovsheni_ joined #gluster
15:52 soumya joined #gluster
15:56 kovshenin joined #gluster
16:03 theron joined #gluster
16:07 theron joined #gluster
16:14 karnan joined #gluster
16:21 liquidat joined #gluster
16:37 smohan joined #gluster
16:40 rcschool joined #gluster
16:59 Philambdo1 joined #gluster
17:13 Rapture joined #gluster
17:17 TealS joined #gluster
17:29 nsoffer joined #gluster
17:43 lalatenduM joined #gluster
17:44 shubhendu_ joined #gluster
17:47 redbeard joined #gluster
17:49 lpabon joined #gluster
18:03 rcschool joined #gluster
18:13 joseki joined #gluster
18:14 joseki hi everyone. I'm trying to back up a node, and used rsync -aAXv, and somehow the amount copied over to the backup exceeds what I started with. I think perhaps something might be turning symlinks into files? What's a recommended way to make sure a brick is backed up?
18:16 hamiller joseki, I think you should back up a volume via a client, not directly from the backend brick
18:18 joseki right, i could do that
18:20 shaunm_ joined #gluster
18:24 lalatenduM_ joined #gluster
18:28 stickyboy joined #gluster
18:28 joseki I just wanted a way to completely restore a brick; I assumed I would need all the .glusterfs directories as well
18:33 lalatenduM joined #gluster
18:37 smohan_ joined #gluster
18:40 lexi2 joined #gluster
18:44 kkeithley joined #gluster
18:52 lalatenduM joined #gluster
18:53 hamiller joseki, 'administrative metadata' and other issues mean backup/restore is much simpler if you leverage a client mount for both.
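(A minimal sketch of that approach; the server, volume name and paths are placeholders:)

    # back up through a temporary FUSE mount rather than the raw brick, so the
    # .glusterfs tree and xattr-based internals stay out of the copy
    mount -t glusterfs server1:/myvol /mnt/myvol-backup
    rsync -aAXv /mnt/myvol-backup/ /backups/myvol/
    umount /mnt/myvol-backup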
19:01 uxbod joined #gluster
19:04 ekuric joined #gluster
19:04 julim joined #gluster
19:08 Rapture joined #gluster
19:11 deniszh joined #gluster
19:27 glusterbot News from newglusterbugs: [Bug 1218400] glfs.h:46:21: fatal error: sys/acl.h: No such file or directory <https://bugzilla.redhat.com/show_bug.cgi?id=1218400>
19:31 scooby2 joined #gluster
19:38 sacrelege joined #gluster
19:44 shaunm_ joined #gluster
19:45 deniszh joined #gluster
19:49 joseki another question. is there information on how to properly add another node to a "distributed" cluster? I have a single-node distributed cluster and want to add another machine, so it's about as easy as it gets. I'm just worried about if/when a rebalance will occur and how that will impact my data
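(The usual sequence is peer probe, add-brick, then an explicit rebalance; nothing is migrated until a rebalance is started, and a fix-layout-only pass updates directory layouts without moving data. A sketch with placeholder names:)

    gluster peer probe newnode                          # join the new machine to the trusted pool
    gluster volume add-brick myvol newnode:/bricks/b1   # new brick holds no data yet
    gluster volume rebalance myvol fix-layout start     # only fix layouts; no files are moved
    gluster volume rebalance myvol start                # optional: migrate existing files too
    gluster volume rebalance myvol status               # watch progress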
19:50 DV joined #gluster
19:55 Pupeno_ joined #gluster
19:56 DV joined #gluster
20:16 smohan joined #gluster
20:24 VeggieMeat joined #gluster
20:25 ultrabizweb joined #gluster
20:28 owlbot joined #gluster
20:29 lexi2 joined #gluster
20:31 VeggieMeat_ joined #gluster
20:33 nsoffer joined #gluster
20:34 ultrabizweb joined #gluster
20:35 lezo joined #gluster
20:39 badone__ joined #gluster
20:52 theron joined #gluster
20:58 Rapture joined #gluster
20:59 p8952 joined #gluster
21:21 joseki hamiller: good suggestion, thanks
21:52 kmai007 joined #gluster
21:53 kmai007 Has anybody seen this who can help me interpret the matrix? http://fpaste.org/218381/43077633/
22:11 rcschool joined #gluster
22:15 gildub joined #gluster
22:22 theron joined #gluster
23:13 Rapture joined #gluster
23:22 badone_ joined #gluster
23:24 uxbod joined #gluster
23:28 theron_ joined #gluster
23:38 kovshenin joined #gluster
23:48 Pupeno joined #gluster
23:55 plarsen joined #gluster
