IRC log for #gluster, 2016-06-01

All times are shown in UTC.

Time Nick Message
00:19 Acinonyx joined #gluster
00:36 F2Knight joined #gluster
00:44 RameshN joined #gluster
01:06 muneerse joined #gluster
01:24 julim joined #gluster
01:29 natarej joined #gluster
01:37 Lee1092 joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 itisravi joined #gluster
01:58 harish_ joined #gluster
01:58 luizcpg joined #gluster
02:02 haomaiwang joined #gluster
02:08 plarsen joined #gluster
02:10 julim joined #gluster
02:20 nehar joined #gluster
02:33 haomaiwang joined #gluster
02:34 harish_ joined #gluster
03:01 haomaiwang joined #gluster
03:20 luizcpg joined #gluster
03:40 itisravi joined #gluster
03:56 armyriad joined #gluster
04:01 haomaiwang joined #gluster
04:13 armyriad joined #gluster
04:15 kotreshhr joined #gluster
04:20 overclk joined #gluster
04:23 aravindavk joined #gluster
04:23 harish_ joined #gluster
04:25 natarej joined #gluster
04:27 gem joined #gluster
04:29 natarej_ joined #gluster
04:36 sage joined #gluster
04:38 RameshN joined #gluster
04:38 rastar joined #gluster
04:41 sakshi joined #gluster
04:41 harish_ joined #gluster
04:41 atinm joined #gluster
04:52 ppai joined #gluster
04:53 natarej joined #gluster
04:56 xMopxShell joined #gluster
04:56 shubhendu joined #gluster
04:57 gowtham_ joined #gluster
04:57 jiffin joined #gluster
04:59 nehar joined #gluster
05:01 haomaiwang joined #gluster
05:01 nbalacha joined #gluster
05:06 gem joined #gluster
05:06 gem_ joined #gluster
05:10 spalai joined #gluster
05:12 aspandey joined #gluster
05:13 ndarshan joined #gluster
05:16 btpier joined #gluster
05:17 rafi joined #gluster
05:17 Manikandan joined #gluster
05:20 rafi1 joined #gluster
05:20 Apeksha joined #gluster
05:23 ashiq joined #gluster
05:31 hgowtham joined #gluster
05:32 poornimag joined #gluster
05:36 kotreshhr joined #gluster
05:38 Bhaskarakiran joined #gluster
05:39 natarej_ joined #gluster
05:46 DV_ joined #gluster
05:54 atalur joined #gluster
05:55 PaulCuzner left #gluster
05:56 karthik___ joined #gluster
05:57 spalai joined #gluster
06:01 haomaiwang joined #gluster
06:02 kovshenin joined #gluster
06:03 hagarth joined #gluster
06:05 Saravanakmr joined #gluster
06:12 deniszh joined #gluster
06:12 MikeLupe joined #gluster
06:14 robb_nl joined #gluster
06:15 jtux joined #gluster
06:19 prasanth joined #gluster
06:22 Wizek_ joined #gluster
06:33 harish_ joined #gluster
06:40 hackman joined #gluster
06:45 gem joined #gluster
06:45 kdhananjay joined #gluster
06:48 nishanth joined #gluster
06:49 karnan joined #gluster
06:49 d0nn1e joined #gluster
06:54 hagarth joined #gluster
06:56 hchiramm joined #gluster
06:58 karnan joined #gluster
07:01 haomaiwang joined #gluster
07:07 deniszh1 joined #gluster
07:11 pur__ joined #gluster
07:11 deniszh joined #gluster
07:12 ItsMe` joined #gluster
07:14 rastar joined #gluster
07:15 ctria joined #gluster
07:16 jri joined #gluster
07:18 ivan_rossi joined #gluster
07:20 fsimonce joined #gluster
07:36 [Enrico] joined #gluster
07:40 Javezim joined #gluster
07:41 Javezim Hey All, We are having an issue with a current Gluster setup. Basically we have a Distributed-Replicate over 12 Bricks. When data is written to the cluster, I'm not sure if it's dropping out or something with the replication, but one version of the file will be fine and the other will have an older date and a smaller size.
07:41 Javezim This of course then means it's in split-brain
07:41 Javezim But what I am trying to figure out is why it stops replicating the file at some point, so that it is bad on one of the bricks.
07:42 Javezim Are there any options for gluster volumes to get better performance with the replication, so that it doesn't drop or stop at any point and leave the file behind?
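
Note: files stranded in split-brain like this can be listed, and on 3.7.x resolved, straight from the gluster CLI. A minimal sketch, assuming a volume named "gv0" and an affected file path — both hypothetical:

    gluster volume heal gv0 info split-brain                 # list files currently in split-brain
    gluster volume heal gv0 split-brain bigger-file <FILE>   # resolve by keeping the larger copy
    gluster volume heal gv0 split-brain source-brick <HOST>:<BRICK> <FILE>   # or pick one brick's copy

Here <FILE> is the path relative to the volume root, e.g. /backups/meta.db.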
07:43 auzty joined #gluster
07:47 rastar joined #gluster
07:50 ramky joined #gluster
07:54 pur__ joined #gluster
08:01 haomaiwang joined #gluster
08:04 Telsin joined #gluster
08:05 arcolife joined #gluster
08:10 anil joined #gluster
08:11 Slashman joined #gluster
08:15 Telsin left #gluster
08:19 paul98 joined #gluster
08:25 jri joined #gluster
08:27 hybrid512 joined #gluster
08:27 hybrid512 joined #gluster
08:28 karnan joined #gluster
08:40 ahino joined #gluster
08:44 karthik___ joined #gluster
08:49 ndarshan joined #gluster
08:53 Ulrar Any news on the 3.7.12 release? Did you decide on a date? We had a huge crash tonight, everyone is getting impatient here :(
09:01 haomaiwang joined #gluster
09:03 hackman joined #gluster
09:05 atinm Ulrar, it will be out early this month
09:05 atinm Ulrar, could you share the crashes in the ML?
09:06 Ulrar atinm: Sorry, wasn't clear, the crash isn't gluster related. We are using 3.7.6 in production currently, and the heals are freezing all the VMs for 30 to 60 minutes
09:07 Ulrar So all the clients are pretty mad when it happens
09:07 atinm Ulrar, if you can provide more details, kdhananjay can help you I believe
09:08 Ulrar atinm: Yeah, we already talked; he got me a tarball of the current dev version, seems to solve everything
09:08 Ulrar That's why I'm waiting for 3.7.12 :)
09:08 atinm uebera||, excellent
09:08 atinm Ulrar, ^^
09:08 atinm Ulrar, so we appreciate your patience for a few more days :)
09:09 Ulrar Sure, no problem. I was wondering if it was days or months
09:09 Ulrar glad it's days :D
09:09 Ulrar thanks !
09:12 uebera|| atinm: yw ;)
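
Note: while waiting on a fixed release, heal activity of the kind Ulrar describes can at least be watched from the CLI. A minimal sketch, assuming a volume named "gv0" — the name is hypothetical:

    gluster volume heal gv0 info                    # entries still pending heal, per brick
    gluster volume heal gv0 statistics heal-count   # just the counts, cheaper on a busy volume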
09:38 nishanth joined #gluster
09:46 DV_ joined #gluster
09:47 [Enrico] joined #gluster
09:49 ahino joined #gluster
09:50 aphorise What's the Mbit throughput I can expect in an 8 or 12 node setup? I couldn't find anything more than http://blog.gluster.org/category/performance/
10:01 haomaiwang joined #gluster
10:03 julim joined #gluster
10:16 deniszh joined #gluster
10:19 lsde joined #gluster
10:20 jocke- joined #gluster
10:22 lsde hi, is compiling on OS X still supported? I haven't been able to build it on 10.11... it complains about:
10:22 lsde './syscall.h:126:61: error: array has incomplete element type 'const struct timeval''
10:22 lsde gcc is clang LLVM 7.3
10:22 lsde same with gcc-5.3.0
10:23 lsde I also had to manually add a define for GF_XATTR_NAME_MAX to compat.h, as it wasn't detected for Darwin
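
Note: the "incomplete element type 'const struct timeval'" error usually means struct timeval has not been declared by the time syscall.h uses it; on Darwin it comes from <sys/time.h>, which is not pulled in transitively the way it often is with glibc. A sketch of the usual fix, not necessarily the patch Gluster ended up taking:

    /* near the top of syscall.h, before any use of struct timeval */
    #include <sys/time.h>   /* declares struct timeval on Darwin and other POSIX systems */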
10:34 edong23 joined #gluster
10:44 atinm joined #gluster
11:01 haomaiwang joined #gluster
11:06 dlambrig_ joined #gluster
11:15 muneerse2 joined #gluster
11:19 arcolife joined #gluster
11:25 MikeLupe2 joined #gluster
11:33 ahino1 joined #gluster
11:41 ppai joined #gluster
11:43 ecoreply_ joined #gluster
11:46 claude_ joined #gluster
11:46 atinm joined #gluster
11:47 kotreshhr left #gluster
11:48 johnmilton joined #gluster
11:49 prasanth joined #gluster
11:58 bluenemo joined #gluster
12:01 haomaiwang joined #gluster
12:03 rafi1 REMINDER: the Gluster community meeting has started in #gluster-meeting
12:03 anil left #gluster
12:07 unclemarc joined #gluster
12:12 dlambrig_ left #gluster
12:19 [Enrico] joined #gluster
12:20 karnan joined #gluster
12:26 nehar joined #gluster
12:31 MikeLupe joined #gluster
12:38 nehar joined #gluster
12:39 plarsen joined #gluster
12:53 arcolife joined #gluster
13:00 hi11111 joined #gluster
13:00 rwheeler joined #gluster
13:01 haomaiwang joined #gluster
13:02 nehar joined #gluster
13:12 karnan joined #gluster
13:21 Manikandan joined #gluster
13:21 mpietersen joined #gluster
13:22 ashiq joined #gluster
13:24 RameshN joined #gluster
13:26 guhcampos joined #gluster
13:26 ahino joined #gluster
13:28 skylar joined #gluster
13:32 luis_silva joined #gluster
13:33 nbalacha joined #gluster
13:35 hagarth joined #gluster
13:37 gowtham_ joined #gluster
13:38 luis_silva jiffin: Thanks, I also notice that this is fixed in newer versions so upgrading might be the way to go here.
13:39 gem joined #gluster
13:41 jiffin luis_silva: np
13:41 jiffin lsde: I guess kkeithley can help u out
13:42 hchiramm joined #gluster
13:44 lsde jiffin: thanks
13:46 lsde kkeithley:
13:46 lsde 12:22:02 - lsde: hi, is compiling on OS X still supported? I haven't been able to build it on 10.11... it complains about:
13:46 lsde 12:22:02 - lsde: './syscall.h:126:61: error: array has incomplete element type 'const struct timeval''
13:46 lsde 12:22:02 - lsde: gcc is clang LLVM 7.3
13:46 lsde 12:22:02 - lsde: same with gcc-5.3.0
13:46 lsde 12:23:35 - lsde: I also had to manually add a define for GF_XATTR_NAME_MAX to compat.h, as it wasn't detected for Darwin
13:47 kkeithley lsde: that would have been a good question for the Community Meeting an hour ago
13:48 kkeithley I would not say it's not supported.  There needs to be a champion for keeping it alive and supported.
13:49 kkeithley I presume you're referring to 3.7.11 on El Cap.
13:50 kkeithley or 3.8.0rc2
13:50 luizcpg joined #gluster
13:51 lsde kkeithley: i was building from master, precisely e341d282
13:52 kkeithley ah, okay
13:53 kkeithley hagarth, amye: do you guys+gals want to weigh in on our support for building on OS X?
13:54 lsde kkeithley: I'll just try to build it from v3.7.11 and we'll see
13:55 hagarth kkeithley: I would like gluster to at least compile on OS X. Lacking effective maintainers and OS X hardware, it is difficult for us to support that port.
13:56 shaunm joined #gluster
13:58 kkeithley that would be an interesting data point.  I honestly can't remember the last time I personally built on OS X.  Looks like I tried something back in December.
13:59 hagarth kkeithley: if someone can provide an OS X Jenkins slave it would be helpful to accomplish this goal ;)
13:59 lsde kkeithley: v3.7.11 fails with a different error:
13:59 lsde In file included from ../../contrib/timer-wheel/timer-wheel.c:24:
13:59 lsde ../../contrib/timer-wheel/timer-wheel.h:42:9: error: unknown type name 'pthread_spinlock_t'; did you mean 'pthread_rwlock_t'?
13:59 lsde pthread_spinlock_t lock;      /* base lock */
13:59 lsde ^~~~~~~~~~~~~~~~~~
13:59 lsde pthread_rwlock_t
14:00 kkeithley spinlock was a recent addition to 3.7 IIRC
14:00 kkeithley I can't reach the Mac we have here in our lab. I doubt it has been updated to El Capitan
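
Note: Darwin has no pthread_spinlock_t, so code using it directly will not build there. The usual portability technique is a thin shim that falls back to a mutex with the same call shape. A minimal sketch — the gf_spin* names are hypothetical, and this is not Gluster's actual fix:

    #include <pthread.h>

    #if defined(__APPLE__)
    /* no POSIX spinlocks on Darwin: fall back to a mutex */
    typedef pthread_mutex_t gf_spinlock_t;
    #define gf_spin_init(l)    pthread_mutex_init((l), NULL)
    #define gf_spin_lock(l)    pthread_mutex_lock(l)
    #define gf_spin_unlock(l)  pthread_mutex_unlock(l)
    #define gf_spin_destroy(l) pthread_mutex_destroy(l)
    #else
    typedef pthread_spinlock_t gf_spinlock_t;
    #define gf_spin_init(l)    pthread_spin_init((l), PTHREAD_PROCESS_PRIVATE)
    #define gf_spin_lock(l)    pthread_spin_lock(l)
    #define gf_spin_unlock(l)  pthread_spin_unlock(l)
    #define gf_spin_destroy(l) pthread_spin_destroy(l)
    #endif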
14:02 ira joined #gluster
14:02 jiffin1 joined #gluster
14:04 kkeithley nigelb or someone mentioned putting a Mac Mini into the data center in order to have it available for Jenkins
14:05 nigelb Not me for sure :)
14:06 misc likely me, but for that, we should already have the existing hardware first
14:09 kkeithley lsde: anyway.....   send patches!
14:10 kkeithley misc: I can extract the Mac Mini from the lab here, but then what? Getting stuff into the DC seems to be an NP-hard problem.
14:11 misc kkeithley: I know..
14:12 lsde kkeithley: you guys use gerrit right?
14:12 kkeithley yes, we use gerrit, it's at  review.gluster.org
14:13 lsde kkeithley: cool, i'll drop some lines there then.
14:13 kkeithley thanks
14:13 kkeithley lsde++
14:13 glusterbot kkeithley: lsde's karma is now 1
14:19 lsde kkeithley: in case you need an OS X machine for a Jenkins slave, I have a spare old MacBook I use only occasionally for media re-encoding, located in CZ on a 100/10 Mbit line.
14:20 ivan_rossi left #gluster
14:21 kkeithley nigelb, misc: ^^^   okay, we'll keep it in mind.
14:25 kkeithley we do have a Mac Mini. ATM it's not in a convenient place for jenkins to access it
14:27 hagarth nigelb: was there an issue with gerrit web listing all patch owners as anonymous cowards sometime back?
14:29 nigelb hagarth: define some time back?
14:29 hagarth nigelb: ~2 hours
14:30 nigelb Not that I know of.
14:30 nigelb we are, however, having issues post the gerrit upgrade.
14:30 hagarth nigelb: trying to ascertain if it was a real issue or my coffee-deprived morning that caused the problem
14:30 hagarth nigelb: cannot observe it now though
14:30 nigelb If your github username does not match your gerrit account, you're in trouble.
14:30 nigelb (Please don't logout if that's the case)
14:31 hagarth nigelb: ah ok
14:31 nigelb my email to -devel and -infra has details.
14:32 hagarth nigelb: will check, thank you! in my case the usernames happen to be the same.
14:32 haomaiwang joined #gluster
14:32 guhcampos joined #gluster
14:32 haomaiwang joined #gluster
14:44 kpease joined #gluster
14:53 spalai joined #gluster
14:53 spalai left #gluster
14:54 spalai joined #gluster
14:54 spalai joined #gluster
14:57 Javezim Hey All, We are having an issue with a current Gluster setup. Basically we have a Distributed-Replicate over 12 Bricks. When data is written to the cluster, I'm not sure if it's dropping out or something with the replication, but one version of the file will be fine and the other will have an older date and a smaller size.
14:57 Javezim This of course then means it's in split-brain
14:57 Javezim But what I am trying to figure out is why it stops replicating the file at some point, so that it is bad on one of the bricks.
14:57 Javezim Are there any options for gluster volumes to get better performance with the replication, so that it doesn't drop or stop at any point and leave the file behind?
14:59 bluenemo joined #gluster
15:01 haomaiwang joined #gluster
15:02 Manikandan joined #gluster
15:07 kotreshhr joined #gluster
15:07 kotreshhr left #gluster
15:09 hagarth joined #gluster
15:13 wushudoin joined #gluster
15:15 arcolife joined #gluster
15:20 stormi left #gluster
15:26 Lee1092 joined #gluster
15:28 JoeJulian Javezim: replication happens in the client. The client writes to each replica simultaneously. If that's not happening, check your firewalls.
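
Note: since replication is driven by the client, every client must be able to reach every brick directly; a brick it cannot reach simply stops receiving writes, which produces exactly the stale-copy symptom Javezim describes. A quick connectivity check, with hypothetical volume and host names — read the real brick ports from volume status, as they vary per brick:

    gluster volume status gv0   # shows each brick's host and TCP port
    nc -zv server1 24007        # glusterd management port
    nc -zv server1 49152        # a brick port, as reported by volume status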
15:32 jiffin joined #gluster
15:42 spalai left #gluster
15:44 MikeLupe joined #gluster
15:48 MikeLupe2 joined #gluster
15:48 arcolife joined #gluster
15:49 MikeLupe3 joined #gluster
15:55 jiffin joined #gluster
15:58 MikeLupe4 joined #gluster
15:59 MikeLupe5 joined #gluster
16:01 haomaiwang joined #gluster
16:03 nathwill joined #gluster
16:05 guhcampos joined #gluster
16:08 arcolife joined #gluster
16:09 shyam joined #gluster
16:09 jiffin1 joined #gluster
16:09 MikeLupe6 joined #gluster
16:10 Debloper joined #gluster
16:12 MikeLupe7 joined #gluster
16:13 MikeLupe8 joined #gluster
16:14 MikeLupe9 joined #gluster
16:20 ashiq joined #gluster
16:23 haomaiwang joined #gluster
16:26 jiffin1 joined #gluster
16:30 F2Knight joined #gluster
16:33 Gnomethrower joined #gluster
16:35 MikeLupe10 joined #gluster
16:46 DV_ joined #gluster
16:47 spalai joined #gluster
16:58 guhcampos joined #gluster
17:03 MikeLupe11 joined #gluster
17:11 ira joined #gluster
17:13 jiffin1 joined #gluster
17:14 ahino joined #gluster
17:30 MikeLupe11 joined #gluster
17:38 spalai left #gluster
17:43 mpietersen joined #gluster
17:46 skylar joined #gluster
17:46 hackman joined #gluster
17:50 hagarth joined #gluster
17:50 spalai joined #gluster
17:51 shyam joined #gluster
17:58 Telsin joined #gluster
18:00 Telsin left #gluster
18:11 haomaiwang joined #gluster
18:15 primehaxor joined #gluster
18:38 hagarth joined #gluster
18:42 chirino joined #gluster
19:02 mpietersen joined #gluster
19:04 mpietersen joined #gluster
19:06 mpietersen joined #gluster
19:07 mpietersen joined #gluster
19:10 mpietersen joined #gluster
19:13 mpietersen joined #gluster
19:13 guhcampos joined #gluster
19:18 hagarth joined #gluster
19:20 deniszh joined #gluster
19:24 Pupeno joined #gluster
19:34 Telsin joined #gluster
19:43 ctria joined #gluster
20:04 guhcampos joined #gluster
20:15 amye joined #gluster
20:26 ira joined #gluster
20:30 guhcampos joined #gluster
20:31 shaunm joined #gluster
20:55 papamoose joined #gluster
20:59 johnmilton joined #gluster
20:59 Pupeno joined #gluster
21:06 dnunez joined #gluster
21:10 deniszh joined #gluster
21:14 Pupeno joined #gluster
21:49 Pupeno joined #gluster
22:08 luizcpg joined #gluster
22:20 MikeLupe11 joined #gluster
22:32 johnmilton joined #gluster
22:34 MikeLupe11 hi - I got back to trying to add a second drive to each of my r3a1 nodes. I've given up on the idea of growing the volume by fiddling with LVM & co. I'll try to "locally" distribute to a second brick. But I'm not sure how... do I have to change the volume to replicated/distributed? And how do I add this 2nd brick for this volume to the arbiter disk?
22:38 PaulCuzner joined #gluster
22:38 MikeLupe11 The docs show how to add a brick on a new node; adding another disk locally is something I can't figure out :/
22:38 MikeLupe11 http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Expanding_Volumes
22:38 JoeJulian There's no difference
22:38 PaulCuzner left #gluster
22:39 JoeJulian You're adding a brick, regardless of its location.
22:40 MikeLupe11 ok - but how do I address it? And is it enough to simply create a "brick2" folder on the mounted disk?
22:40 MikeLupe11 For example, to add server4:/exp4 to test-volume:
22:40 MikeLupe11 # gluster volume add-brick test-volume server4:/exp4
22:40 MikeLupe11 That was pasted from the community docs.
22:41 MikeLupe11 I'm a bit scared, I don't want to bust my setup, sorry
22:42 MikeLupe11 Entire ovirt/gluster setup almost ready to take "production" status (on a private setup)
22:42 JoeJulian What's the name of your server and the path of the new brick?
22:43 MikeLupe11 I haven't got a new brick yet, that's my problem to begin with. Where should I create the directory needed for that brick...
22:44 MikeLupe11 The problem is, in oVirt I'm able to create new bricks on the partition of the new disk, but not on the 3rd (arbiter) node... so that confused me a bit.
22:44 JoeJulian Anywhere. I prefer /data/gluster/$volume/brickN; some prefer under /srv.
22:45 drue joined #gluster
22:45 Javezim @JoeJulian Thing is, it does begin writing, but it must stop at some point because one of the replica files gets left behind
22:45 Javezim The files in question here are metadata files for a backup
22:46 Javezim and are being read/written constantly for an hour or two
22:46 Javezim at some point something happens and one brick's file will be 200MB, dated June 2nd 00:05:00 AM, and the other brick's will be 100MB, dated June 1st 10:00:00 PM, for example
22:48 JoeJulian My guess is network or load problem. Look in the client log.
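
Note: the FUSE client logs to /var/log/glusterfs/ on the client machine, in a file named after the mount point with slashes turned into dashes; a mount at /mnt/gluster, for example, logs to /var/log/glusterfs/mnt-gluster.log (the mount path here is hypothetical). Disconnects and write failures around the timestamps of the bad copies are the thing to look for:

    grep -E '\] [EW] \[' /var/log/glusterfs/mnt-gluster.log | tail -50   # recent errors and warnings
    grep -i disconnect /var/log/glusterfs/mnt-gluster.log                # dropped brick connections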
22:51 MikeLupe11 - /data/gluster is already here. And I think I know what my problem is... I have to create a link from /gluster/data/brick2 to my new drive, don't I?
22:51 JoeJulian No, symlinks will not work.
22:52 MikeLupe11 ok, I'm getting closer
22:52 MikeLupe11 so how do I place "brick2" on my new disk? I have to use another folder in that case..?
22:53 MikeLupe11 another than /gluster/data/brick2
22:58 MikeLupe11 I'm sorry it's confusing - confusing mostly for me..
22:58 MikeLupe11 I'll try again... I would like to add the new disk drive to this volume:
22:58 MikeLupe11 Brick1: slp-ovirtnode-01.corp.domain.tld:/gluster/data/brick1
22:58 MikeLupe11 Brick2: slp-ovirtnode-02.corp.domain.tld:/gluster/data/brick1
22:58 MikeLupe11 Brick3: slp-ovirtnode-03.corp.domain.tld:/gluster/data/brick1 (arbiter)
23:00 MikeLupe11 And here's the mount for /gluster/data:
23:00 MikeLupe11 - /dev/mapper/gluster_vg1-data /gluster/data
23:01 MikeLupe11 So whatever I place in /gluster/data will be on the old disk....
23:01 d0nn1e joined #gluster
23:01 haomaiwang joined #gluster
23:03 MikeLupe11 That means, if I want to use the /gluster/data structure, I must fiddle with LVM volume groups etc., is that right?
23:03 MikeLupe11 I'm banging my head against the wall and still can't figure it out... damn, sorry
23:04 JoeJulian brb... conf call
23:04 MikeLupe11 oky, sry
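
Note: to pull the thread above together — the new disk gets its own filesystem and mount point, and the existing /gluster/data LVM layout is left alone. A sketch, assuming the volume is named "data", the new disk is /dev/sdb, and the new mount is /gluster/data2; all of these names are assumptions, and add-brick syntax differs slightly across versions, so check the docs for yours:

    # on each of the three nodes
    mkfs.xfs -i size=512 /dev/sdb
    mkdir -p /gluster/data2
    mount /dev/sdb /gluster/data2    # and add it to /etc/fstab
    mkdir -p /gluster/data2/brick2   # brick dir below the mount, so an unmounted disk is detectable

    # once, from any node: add a whole new replica set (bricks in multiples of 3
    # for replica 3 arbiter 1; the third brick again lands on the arbiter node)
    gluster volume add-brick data \
        slp-ovirtnode-01.corp.domain.tld:/gluster/data2/brick2 \
        slp-ovirtnode-02.corp.domain.tld:/gluster/data2/brick2 \
        slp-ovirtnode-03.corp.domain.tld:/gluster/data2/brick2

    # then spread existing data onto the new bricks
    gluster volume rebalance data start

The volume comes out as distributed-replicate, 2 x (2 + 1), with the existing bricks untouched.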
23:05 amye joined #gluster
23:06 plarsen joined #gluster
23:49 deangiberson joined #gluster
23:53 deangiberson Can anyone suggest a method to get the name of an object from a glfs_fd_t *?
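
Note: as far as I know, glfs_fd_t is opaque and gfapi has no public reverse lookup from an fd to a name, so applications usually record the path themselves at open time. A minimal C sketch — the named_fd wrapper is hypothetical; glfs_open itself is a real gfapi call:

    #include <glusterfs/api/glfs.h>
    #include <stdlib.h>
    #include <string.h>

    struct named_fd {
        glfs_fd_t *fd;
        char      *path;   /* remembered at open time */
    };

    static struct named_fd *named_open(glfs_t *fs, const char *path, int flags)
    {
        struct named_fd *nfd = calloc(1, sizeof(*nfd));
        if (!nfd)
            return NULL;
        nfd->fd = glfs_open(fs, path, flags);
        if (!nfd->fd) {
            free(nfd);
            return NULL;
        }
        nfd->path = strdup(path);   /* the "name" to report later */
        return nfd;
    }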
23:58 Norky joined #gluster
