
IRC log for #gluster, 2016-03-18


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:16 JoeJulian FYI: Just got the 3.7.9 release email. Packages should be built shortly.
00:16 johnmilton joined #gluster
00:16 post-factum nyan
00:17 post-factum I wonder whether they've merged all the memleak-related patches
00:19 JoeJulian bug 1309567 was the tracking bug for this release
00:19 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1309567 unspecified, unspecified, ---, vbellur, ASSIGNED , Tracker for glusterfs-3.7.9
00:20 post-factum yup, couldn't find "Fix for rpc_transport_t leak"
00:20 post-factum http://review.gluster.org/#/c/13456/
00:20 glusterbot Title: Gerrit Code Review (at review.gluster.org)
00:20 post-factum marked as merged
00:20 post-factum hmm
00:21 post-factum merged but not backported?
00:21 post-factum this is weird :(
00:25 post-factum everything else seems to be fixed
00:29 a2 joined #gluster
00:35 jvandewege joined #gluster
00:40 jvandewege joined #gluster
00:49 plarsen joined #gluster
00:57 harish_ joined #gluster
01:01 haomaiwang joined #gluster
01:11 ggarg joined #gluster
01:27 EinstCrazy joined #gluster
01:28 EinstCrazy joined #gluster
01:38 92AAAKX7M joined #gluster
01:40 nangthang joined #gluster
01:54 baojg joined #gluster
02:07 harish_ joined #gluster
02:08 atrius joined #gluster
02:15 ahino joined #gluster
02:16 csaba joined #gluster
02:17 lkoranda joined #gluster
02:20 haomaiwa_ joined #gluster
02:31 atinm joined #gluster
02:46 haomaiwa_ joined #gluster
02:52 ggarg joined #gluster
03:01 haomaiwa_ joined #gluster
03:03 sakshi joined #gluster
03:12 atinm joined #gluster
03:16 nishanth joined #gluster
03:22 EinstCra_ joined #gluster
03:52 nbalacha joined #gluster
03:59 itisravi joined #gluster
04:01 haomaiwa_ joined #gluster
04:01 RameshN joined #gluster
04:05 shubhendu joined #gluster
04:16 calavera joined #gluster
04:23 overclk joined #gluster
04:27 anmol joined #gluster
04:40 baojg joined #gluster
04:43 ramky joined #gluster
04:47 jiffin joined #gluster
04:55 JPaul joined #gluster
04:57 ndarshan joined #gluster
04:58 ovaistariq joined #gluster
04:59 ovaistar_ joined #gluster
04:59 pur joined #gluster
05:01 haomaiwa_ joined #gluster
05:02 hgowtham joined #gluster
05:02 prasanth joined #gluster
05:02 glisignoli joined #gluster
05:04 aravindavk joined #gluster
05:06 kanagaraj joined #gluster
05:07 ramky joined #gluster
05:07 nehar joined #gluster
05:12 nishanth joined #gluster
05:12 jbrooks joined #gluster
05:16 geniusoftime Hello, can someone please explain to me this ganesha-ha.conf variable?
05:19 gowtham joined #gluster
05:22 kdhananjay joined #gluster
05:22 anoopcs jiffin, ^^
05:24 kanagaraj_ joined #gluster
05:31 kovshenin joined #gluster
05:33 Apeksha joined #gluster
05:35 kshlm joined #gluster
05:40 foster joined #gluster
05:44 Manikandan joined #gluster
05:47 ppai joined #gluster
05:50 Saravanakmr joined #gluster
05:50 tom[] joined #gluster
05:53 calavera joined #gluster
05:54 poornimag joined #gluster
05:56 deniszh joined #gluster
06:01 haomaiwa_ joined #gluster
06:01 arcolife joined #gluster
06:03 kanagaraj joined #gluster
06:09 kanagaraj_ joined #gluster
06:11 atalur joined #gluster
06:14 spalai joined #gluster
06:20 ashiq joined #gluster
06:22 ramky joined #gluster
06:22 karnan joined #gluster
06:22 kanagaraj joined #gluster
06:24 nehar joined #gluster
06:29 kanagaraj_ joined #gluster
06:34 arcolife joined #gluster
06:38 kanagaraj__ joined #gluster
06:42 liibert joined #gluster
06:47 hchiramm joined #gluster
06:49 anil joined #gluster
06:51 kanagaraj joined #gluster
06:53 mhulsman joined #gluster
06:59 vmallika joined #gluster
07:01 haomaiwa_ joined #gluster
07:03 spalai joined #gluster
07:03 mbukatov joined #gluster
07:12 nangthang joined #gluster
07:14 deniszh1 joined #gluster
07:14 baojg joined #gluster
07:14 kanagaraj_ joined #gluster
07:17 hchiramm joined #gluster
07:19 ramky_ joined #gluster
07:19 jtux joined #gluster
07:30 rastar joined #gluster
07:34 kanagaraj__ joined #gluster
07:41 [Enrico] joined #gluster
07:43 unlaudable joined #gluster
08:01 haomaiwa_ joined #gluster
08:01 d-fence joined #gluster
08:17 jri joined #gluster
08:20 fsimonce joined #gluster
08:21 kanagaraj joined #gluster
08:22 ramky joined #gluster
08:25 ivan_rossi joined #gluster
08:27 deniszh joined #gluster
08:28 robb_nl joined #gluster
08:33 mbukatov joined #gluster
08:40 robb_nl joined #gluster
08:49 mzink_gone joined #gluster
08:51 ctria joined #gluster
09:01 haomaiwa_ joined #gluster
09:04 anmol joined #gluster
09:06 kdhananjay joined #gluster
09:15 lbednar joined #gluster
09:18 DV joined #gluster
09:18 lbednar Hello here, I would like to use gluster version 3.7.1, so I downloaded the http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.1/RHEL/glusterfs-epel.repo repository file on my systems, but when I open this file it points to LATEST instead of 3.7.1. Is this on purpose?
09:19 Slashman joined #gluster
09:20 nangthang joined #gluster
09:26 post-factum lbednar: what is the rationale to use out-of-date version?
09:26 deniszh joined #gluster
09:29 lbednar post-factum: because other components in my infra require this version.
09:32 haomaiwa_ joined #gluster
09:33 lbednar post-factum: what is the reason to serve the LATEST repository under the 3.7.1 link? (: I see it is not only for RHEL but for fedora and centos as well
09:33 post-factum because it is 3.7 repo, i guess, and it always points to the most recent release
09:33 post-factum you may download desired rpms by hand, however
09:37 pjrebollo joined #gluster
09:37 lbednar post-factum: yes, I can just replace s/LATEST/3.7.1/ ... I just wanted to know whether it is on purpose or not.
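[Editor's note: the manual pin lbednar describes can be scripted. This is a sketch — the baseurl line below is a stand-in for the downloaded repo file's contents (not its actual text), so the substitution can be checked locally before touching /etc/yum.repos.d/:]

```shell
# Stand-in for the downloaded glusterfs-epel.repo (illustrative content only):
repo=glusterfs-epel.repo
printf 'baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/RHEL/epel-$releasever/$basearch/\n' > "$repo"

# Pin the floating LATEST symlink to the fixed 3.7.1 release:
sed -i 's|/LATEST/|/3.7.1/|' "$repo"

cat "$repo"
```

[The same substitution applied (as root) to the copy under /etc/yum.repos.d/ pins yum to 3.7.1.]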
09:44 hackman joined #gluster
09:47 spalai joined #gluster
09:49 kdhananjay joined #gluster
09:53 baojg joined #gluster
09:55 anil joined #gluster
09:56 ivan_rossi1 joined #gluster
10:01 haomaiwang joined #gluster
10:03 jri_ joined #gluster
10:06 kdhananjay1 joined #gluster
10:16 ahino joined #gluster
10:16 lbednar left #gluster
10:20 Gnomethrower joined #gluster
10:21 jermudgeon joined #gluster
10:32 ovaistariq joined #gluster
10:40 anil joined #gluster
10:50 pjrebollo joined #gluster
10:58 haomaiwang joined #gluster
11:01 haomaiwa_ joined #gluster
11:06 Wizek joined #gluster
11:07 Wizek joined #gluster
11:23 muneerse2 joined #gluster
11:27 ira joined #gluster
11:28 mbukatov joined #gluster
11:29 XpineX joined #gluster
11:40 chirino joined #gluster
11:44 arcolife joined #gluster
11:47 jri joined #gluster
11:48 ggarg joined #gluster
11:50 armyriad joined #gluster
12:00 plarsen joined #gluster
12:01 mhulsman joined #gluster
12:15 XpineX joined #gluster
12:18 kanagaraj_ joined #gluster
12:19 robb_nl joined #gluster
12:21 kanagaraj_ joined #gluster
12:22 DV joined #gluster
12:26 unclemarc joined #gluster
12:32 deniszh joined #gluster
12:33 ovaistariq joined #gluster
12:35 nbalacha joined #gluster
12:35 RameshN joined #gluster
12:37 shubhendu joined #gluster
12:41 spalai left #gluster
12:52 sakshi joined #gluster
12:55 kanagaraj joined #gluster
13:03 RameshN joined #gluster
13:05 deniszh joined #gluster
13:13 sabansal_ joined #gluster
13:18 deniszh joined #gluster
13:25 EinstCrazy joined #gluster
13:26 EinstCrazy joined #gluster
13:28 mpietersen joined #gluster
13:30 mpietersen joined #gluster
13:35 muneerse joined #gluster
13:37 ws2k33 joined #gluster
13:39 spalai joined #gluster
13:39 spalai left #gluster
13:42 hamiller joined #gluster
13:45 shaunm joined #gluster
13:55 haomaiwang joined #gluster
13:59 haomaiwa_ joined #gluster
14:01 haomaiwa_ joined #gluster
14:03 deniszh joined #gluster
14:13 chirino joined #gluster
14:17 EinstCra_ joined #gluster
14:23 atalur joined #gluster
14:27 kovshenin joined #gluster
14:33 baojg joined #gluster
14:35 ovaistariq joined #gluster
14:38 skylar joined #gluster
14:47 farhorizon joined #gluster
14:49 marlinc I'm not sure whether I'm on the right track or not. Could I use GlusterFS to have multiple file servers in different offices serving home directories for users? The idea is that people can modify files at all locations and the changes get synced to the others
14:52 marlinc I don't think i am
15:01 Manikandan joined #gluster
15:01 haomaiwang joined #gluster
15:08 hchiramm joined #gluster
15:09 JoeJulian marlinc: Unfortunately, until we can have quantum entangled networks with 0 latency despite distance, there is no perfect solution to that.
15:12 klfwip joined #gluster
15:13 klfwip Do gluster bricks always replicate adjacent: Like with replica 2 would brick1 mirror brick2, and brick99 mirror brick100 ?
15:13 klfwip Or can the configuration be more complex
15:13 coredump joined #gluster
15:13 JoeJulian @brick order
15:13 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
15:13 klfwip Haha, I thought that might be a common question yes. Just couldn't find it on google so far.
15:13 JoeJulian :)
15:14 calavera joined #gluster
15:18 ppai joined #gluster
15:20 EinstCrazy joined #gluster
15:25 shubhendu joined #gluster
15:30 baojg joined #gluster
15:35 johnmilton joined #gluster
15:56 nbalacha joined #gluster
16:01 haomaiwa_ joined #gluster
16:02 juhaj joined #gluster
16:04 harish joined #gluster
16:19 overclk joined #gluster
16:22 nbalacha joined #gluster
16:23 jiffin joined #gluster
16:26 robb_nl joined #gluster
16:29 d0nn1e joined #gluster
16:29 DV joined #gluster
16:35 chirino_m joined #gluster
16:36 farhoriz_ joined #gluster
16:38 Wizek joined #gluster
16:53 ppai joined #gluster
16:54 B21956 joined #gluster
16:55 luizcpg_ joined #gluster
16:56 luizcpg_ Hi
16:57 glusterbot luizcpg_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:57 luizcpg_ # gluster snapshot create engine_snapshot_v1 engine
16:57 luizcpg_ snapshot create: failed: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of engine are thinly provisioned LV.
16:57 luizcpg_ Snapshot command failed
16:57 luizcpg_ I’m facing a weird issue when trying to take a snapshot.
16:58 luizcpg_ glusterfs-api-3.7.8-4.el7.x86_64
16:58 luizcpg_ glusterfs-server-3.7.8-4.el7.x86_64
16:58 luizcpg_ glusterfs-3.7.8-4.el7.x86_64
16:58 luizcpg_ vdsm-gluster-4.17.23-1.el7.noarch
16:58 luizcpg_ glusterfs-cli-3.7.8-4.el7.x86_64
16:58 luizcpg_ glusterfs-client-xlators-3.7.8-4.el7.x86_64
16:58 luizcpg_ glusterfs-fuse-3.7.8-4.el7.x86_64
16:58 luizcpg_ glusterfs-libs-3.7.8-4.el7.x86_64
16:58 luizcpg_ glusterfs-geo-replication-3.7.8-4.el7.x86_64
16:58 luizcpg_ I’m using the latest version of gluster
16:58 luizcpg_ Does anyone know what might be happening?
16:59 Norky I'll ask the obvious question, are your bricks made from thin LVs?
17:01 haomaiwa_ joined #gluster
17:02 luizcpg_ lvcreate -n vmos1 -l 100%FREE gluster_vg1
17:02 luizcpg_ I’ve used the cmd above
17:03 hackman joined #gluster
17:04 Norky then no, your LVs are not thin provisioned. You cannot use Gluster snapshotting.
17:04 Norky http://blog.gluster.org/2014/10/gluster-volume-snapshot-howto/
17:04 glusterbot Title: Gluster Volume Snapshot Howto | Gluster Community Website (at blog.gluster.org)
17:05 RameshN joined #gluster
17:05 luizcpg_ Can I convert an existing LV into a thinly provisioned one?
17:05 luizcpg_ Is it risky?
17:06 Norky I don't think so, no.
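[Editor's note: for reference, the howto Norky linked creates bricks the way snapshots expect — a thin pool first, then a thin LV carved out of it. A hedged sketch follows; gluster_vg1 and vmos1 echo the names used above, but the sizes are made up, and the commands need root on a host with free space in the VG (shown for illustration, not run here):]

```shell
# 1. Thin pool inside the volume group (size is an assumption):
lvcreate --size 500G --thin gluster_vg1/thinpool

# 2. Thin (virtual-size) LV for the brick, carved from the pool:
lvcreate --virtualsize 650G --thin gluster_vg1/thinpool --name vmos1

# 3. Filesystem and mount as usual:
mkfs.xfs /dev/gluster_vg1/vmos1
mount /dev/gluster_vg1/vmos1 /gluster/vmos1
```

[Contrast with the plain `lvcreate -n vmos1 -l 100%FREE gluster_vg1` above: that allocates a fat LV directly from the VG, leaving no pool for snapshot copy-on-write, which is why the snapshot command refuses it.]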
17:08 JoeJulian As an aside, luizcpg_, if you could please use a ,,(paste) service when sharing more than three lines, that's considered good IRC etiquette.
17:08 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
17:14 RameshN_ joined #gluster
17:23 klfwip `sudo gluster volume statedump <VOLUME> <OPTIONS>` dies with the cryptic error `volume statedump: failed: Commit failed on <arbitrary server>. Please check log file for details.` - and the logs say "error while parsing the statedump options" but little else.
17:23 klfwip It turns out the issue is that you must mkdir /var/run/gluster
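[Editor's note: klfwip's fix, spelled out as a hedged sketch. Run it on every server in the pool; "myvol" is a placeholder volume name:]

```shell
# glusterd writes statedumps under /var/run/gluster and fails with the
# opaque "error while parsing the statedump options" when it is missing.
mkdir -p /var/run/gluster

# Then the statedump succeeds; output lands in /var/run/gluster/*.dump.*
gluster volume statedump myvol all
```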
17:23 squizzi_ joined #gluster
17:28 bennyturns joined #gluster
17:33 ivan_rossi1 left #gluster
17:37 hagarth joined #gluster
17:58 farhorizon joined #gluster
18:01 haomaiwang joined #gluster
18:04 squizzi joined #gluster
18:04 calavera joined #gluster
18:24 ggarg joined #gluster
18:25 pjrebollo joined #gluster
18:55 robb_nl joined #gluster
19:01 haomaiwa_ joined #gluster
19:03 chirino_ joined #gluster
19:06 ovaistariq joined #gluster
19:17 puiterwijk joined #gluster
19:22 liibert joined #gluster
19:31 luizcpg_ joined #gluster
19:38 Slashman joined #gluster
19:51 shaunm joined #gluster
20:01 haomaiwang joined #gluster
20:11 DV joined #gluster
20:15 calavera joined #gluster
20:18 chirino_m joined #gluster
20:20 Telsin joined #gluster
20:35 luizcpg_ Hi, I’m using tar in order to save all gluster replica 3 data
20:35 luizcpg_ this way [root@gluster1 bkp]# tar cvpf vmos1.tar /gluster/vmos1
20:36 luizcpg_ /dev/mapper/gluster_vg1-vmos1   673G   36G  638G   6% /gluster/vmos1
20:36 luizcpg_ therefore, the total space used is 36G
20:37 luizcpg_ The process is running and
20:37 luizcpg_ 64G Mar 18 20:37 vmos1.tar
20:37 luizcpg_ ^ look the size of the tar file
20:37 luizcpg_ how is it possible?
20:38 luizcpg_ any idea?
20:41 coredump joined #gluster
20:47 gbox joined #gluster
20:50 JoeJulian luizcpg_: sparse files. See this https://gist.github.com/joejulian/538068155756cc552ac3
20:50 glusterbot Title: gist:538068155756cc552ac3 · GitHub (at gist.github.com)
20:51 JoeJulian To have tar attempt to recognize the holes in a file, use `--sparse' ( `-S' ).
20:51 glusterbot JoeJulian: `'s karma is now 0
20:51 JoeJulian '++
20:51 glusterbot JoeJulian: ''s karma is now -3
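[Editor's note: JoeJulian's sparse-file explanation is easy to reproduce. A file with a large apparent size but almost no allocated blocks archives at full size unless tar is asked to detect holes. A small self-contained demo, assuming GNU tar and coreutils:]

```shell
# Make a 100M apparent-size file that occupies (almost) no disk blocks:
truncate -s 100M sparse.img
du -h sparse.img    # allocated size: near zero
ls -lh sparse.img   # apparent size: 100M

# Plain tar reads all 100M of zeros; --sparse / -S records the holes:
tar -cf  plain.tar  sparse.img
tar -cSf sparse.tar sparse.img
ls -l plain.tar sparse.tar
```

[That apparent-vs-allocated gap is exactly why the VM images under /gluster/vmos1, with 36G allocated per df, can produce a tar far larger than df reports.]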
20:52 lanning joined #gluster
21:01 luizcpg_ ok. I’ll take a look
21:01 luizcpg_ thx
21:01 haomaiwa_ joined #gluster
21:04 hagarth joined #gluster
21:22 post-fac1um joined #gluster
21:25 renout_a` joined #gluster
21:25 johnmark joined #gluster
21:26 Kins_ joined #gluster
21:26 rideh joined #gluster
21:26 dastar joined #gluster
21:30 swebb joined #gluster
22:01 haomaiwang joined #gluster
22:10 bennyturns joined #gluster
22:25 deniszh joined #gluster
22:28 luizcpg_ joined #gluster
22:36 hagarth joined #gluster
22:40 farhorizon joined #gluster
22:46 ovaistariq joined #gluster
22:47 squizzi joined #gluster
22:49 calavera joined #gluster
22:50 coredump joined #gluster
23:01 haomaiwa_ joined #gluster
23:15 gbox Does gluster only initiate selfheal upon file access?
23:15 JoeJulian no
23:17 gbox I have a bunch of files it seems to "Skip conservative merge" on, but then an "ls" of the file initiates selfheal.
23:17 gbox I'm probably just confused at this point but I did "gluster volume heal VOLUMENAME"
23:18 JoeJulian As changes are made indexes are updated on the servers marking a write (or other change) as pending or completed. The self-heal daemon walks that tree and initiates the heals.
23:18 JoeJulian But! What you're describing does give me a thought.
23:19 JoeJulian The self-heal queue is based on gfid. If a file hasn't been touched, the gfid and the filename will not have been associated. I suspect the ls is populating a dict that allows the heal to progress based on some additional information.
23:20 JoeJulian If I find some time, I'll see if I can produce the same issue and trace out what's happening.
23:21 gbox I could probably do it while I'm fixing this up.  Do you mean trace the code as well?
23:21 JoeJulian Not really sure yet.
23:21 JoeJulian I'd probably start with trace level logs and see what I can see.
23:21 gbox Yeah the logs are very specific about the code being called too
23:21 JoeJulian I'd also be doing this with a much smaller volume and artificially creating the gfid mismatch.
23:22 gbox Ha, yeah I thought of doing that with setfattr
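[Editor's note: the index tree JoeJulian describes — pending gfids walked by the self-heal daemon — can be inspected directly on a brick. A hedged sketch; the brick path /data/brick1 is hypothetical and everything here is read-only:]

```shell
# Pending-heal entries live as gfid-named links in the xattrop index:
ls /data/brick1/.glusterfs/indices/xattrop/

# Each regular file's gfid also exists as a hardlink under
# .glusterfs/<aa>/<bb>/<gfid>, so a gfid can be resolved to its real
# path by matching inodes, e.g.:
# find /data/brick1 -samefile /data/brick1/.glusterfs/aa/bb/<gfid> -not -path '*/.glusterfs/*'
```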
23:24 gbox Would the Europe and India Gluster gang be interested, or possibly know this?
23:25 gbox BTW I was about to ditch gluster entirely, but thanks to your help, Joe, I will give it a little longer
23:28 plarsen joined #gluster
23:29 luizcpg joined #gluster
23:36 JoeJulian I'm fairly sure pranithk would know.
23:36 JoeJulian And thanks. I do what I can.
23:47 Pupeno joined #gluster
