IRC log for #gluster, 2015-12-03

| Channels | #gluster index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:00 theron joined #gluster
00:01 theron joined #gluster
00:03 zhangjn joined #gluster
00:05 cliluw joined #gluster
00:18 n0b0dyh3r3 joined #gluster
00:19 nangthang joined #gluster
00:38 mswart chrisc_: I have a similar situation, but currently also no solution :-(
00:39 mswart do you use the meta volume and which change detector?
00:39 jrm16020 joined #gluster
00:43 harish joined #gluster
00:55 EinstCrazy joined #gluster
00:55 zhangjn joined #gluster
01:00 ashka joined #gluster
01:15 mlncn joined #gluster
01:19 atinm joined #gluster
01:29 Lee1092 joined #gluster
02:02 nangthang joined #gluster
02:04 kshlm joined #gluster
02:05 theron joined #gluster
02:15 auzty joined #gluster
02:16 calavera joined #gluster
02:47 harish joined #gluster
02:48 atinm joined #gluster
02:55 zhangjn joined #gluster
02:58 [o__o] joined #gluster
03:00 aravindavk joined #gluster
03:25 dlambrig_ joined #gluster
03:27 dgbaley joined #gluster
03:27 nehar joined #gluster
03:27 [o__o] joined #gluster
03:30 bharata-rao joined #gluster
03:42 calavera joined #gluster
03:46 sakshi joined #gluster
03:47 kdhananjay joined #gluster
03:50 itisravi joined #gluster
03:56 nbalacha joined #gluster
04:06 kanagaraj joined #gluster
04:07 shubhendu joined #gluster
04:07 ppai joined #gluster
04:07 kotreshhr joined #gluster
04:09 gem joined #gluster
04:12 nehar joined #gluster
04:15 kovshenin joined #gluster
04:17 RameshN joined #gluster
04:25 atinm joined #gluster
04:28 jiffin joined #gluster
04:32 harish joined #gluster
04:33 ramteid joined #gluster
04:34 RedW joined #gluster
04:34 gildub joined #gluster
04:44 pppp joined #gluster
04:45 zhangjn joined #gluster
04:46 hgowtham joined #gluster
04:49 Manikandan joined #gluster
04:53 ndarshan joined #gluster
04:55 RameshN joined #gluster
04:58 gem joined #gluster
05:06 ashiq joined #gluster
05:06 amye1 joined #gluster
05:07 hgowtham joined #gluster
05:14 deepakcs joined #gluster
05:17 haomaiwa_ joined #gluster
05:25 vmallika joined #gluster
05:26 zhangjn joined #gluster
05:27 raghu joined #gluster
05:27 zhangjn joined #gluster
05:28 Apeksha joined #gluster
05:29 harish joined #gluster
05:32 calavera joined #gluster
05:39 deepakcs joined #gluster
05:39 Bhaskarakiran joined #gluster
05:41 rjoseph joined #gluster
05:46 poornimag joined #gluster
05:46 cholcombe joined #gluster
05:48 anil joined #gluster
05:53 Humble joined #gluster
05:56 rafi joined #gluster
05:59 overclk joined #gluster
06:02 hgowtham joined #gluster
06:09 Humble joined #gluster
06:10 kdhananjay joined #gluster
06:11 atalur joined #gluster
06:11 kotreshhr joined #gluster
06:13 Norky joined #gluster
06:14 kshlm joined #gluster
06:16 RedW joined #gluster
06:18 skoduri joined #gluster
06:19 Manikandan joined #gluster
06:20 m0zes joined #gluster
06:24 vmallika joined #gluster
06:32 Saravana_ joined #gluster
06:32 cholcombe joined #gluster
06:35 nbalacha joined #gluster
06:37 nangthang joined #gluster
06:39 gem joined #gluster
06:47 spalai joined #gluster
06:47 kotreshhr joined #gluster
06:47 SOLDIERz joined #gluster
06:51 kshlm joined #gluster
07:01 gem joined #gluster
07:12 mhulsman joined #gluster
07:29 overclk joined #gluster
07:30 jtux joined #gluster
07:37 Manikandan joined #gluster
07:38 kotreshhr joined #gluster
07:40 vmallika joined #gluster
07:42 owlbot` joined #gluster
08:14 zhangjn joined #gluster
08:19 ivan_rossi joined #gluster
08:19 night left #gluster
08:30 mobaer joined #gluster
08:32 Park joined #gluster
08:35 cliluw joined #gluster
08:38 fsimonce joined #gluster
08:38 [Enrico] joined #gluster
08:41 kshlm joined #gluster
08:46 ctria joined #gluster
08:52 prg3 joined #gluster
09:00 Saravana_ joined #gluster
09:04 itisravi joined #gluster
09:08 c0m0 joined #gluster
09:15 atinm joined #gluster
09:19 Slashman joined #gluster
09:20 vmallika joined #gluster
09:29 spalai joined #gluster
09:32 arcolife joined #gluster
09:32 atalur joined #gluster
09:41 Norky joined #gluster
09:45 harish joined #gluster
09:54 mpingu joined #gluster
09:55 mpingu Hello, I have a problem with GlusterFS 3.7.6 and 3.7.4 running MySQL on a 2-brick volume. Under a load of rename operations I sometimes get "Device or resource busy" errors. Can you give me hints to find the exact reason and maybe report it as a bug?
09:57 ccoffey I thought running mysql on gluster was a definite no-no.
09:58 vmallika joined #gluster
09:58 hgowtham joined #gluster
09:59 zhangjn joined #gluster
10:02 itisravi joined #gluster
10:16 mpingu It was running fine on 3.4 and 3.6, so maybe some changes in 3.7 made it worse?
10:19 Saravana_ joined #gluster
10:19 ivan_rossi joined #gluster
10:20 spalai joined #gluster
10:24 anil joined #gluster
10:25 mpingu ccoffey: yeah, it is not recommended, but I want to try to find the problem
10:26 ivan_rossi left #gluster
10:27 atinm joined #gluster
10:32 pppp joined #gluster
10:35 ivan_rossi joined #gluster
10:35 skoduri joined #gluster
10:37 kdhananjay joined #gluster
10:46 rjoseph joined #gluster
10:50 poornimag joined #gluster
11:03 overclk joined #gluster
11:05 kkeithley1 joined #gluster
11:15 spalai joined #gluster
11:25 lalatend1M joined #gluster
11:46 vmallika joined #gluster
11:51 rastar joined #gluster
11:56 kotreshhr joined #gluster
12:04 rafi1 joined #gluster
12:05 kovshenin joined #gluster
12:05 poornimag joined #gluster
12:06 rjoseph joined #gluster
12:12 skoduri joined #gluster
12:14 nehar joined #gluster
12:28 skoduri joined #gluster
12:29 matclayton joined #gluster
12:30 matclayton Looking at deploying an RF3 cluster (with 1 arbiter) and trying to estimate the size requirements of the arbiter. Any advice on how much space it should require per file or per MB in the main bricks?
12:36 zhangjn joined #gluster
12:37 itisravi matclayton: There's no definitive answer yet, but I gave some suggestions here https://www.gluster.org/pipermail/gluster-users/2015-September/023498.html
12:37 glusterbot Title: [Gluster-users] [Gluster-devel] AFR arbiter volumes (at www.gluster.org)
12:38 matclayton of course, just trying to get an estimate for ~?k per file or ~?k per Meg in the main bricks
12:38 matclayton 8k per file sounds like a perfect starting point
12:39 DV joined #gluster
12:39 itisravi yes, I don't think you'd store that many extended attributes on the file
12:40 matclayton Sure :)
12:40 matclayton I’m just slicing off LVM chunks to allocate to this and want to make sure it’s about right. We can always extend it later, but if it’s about right to start with, that’s a good start :)
12:41 ndarshan joined #gluster
12:43 haomaiwang joined #gluster
12:45 itisravi matclayton: There was a bug uncovered recently where the arbiter brick's files could actually start storing the data, but it has been fixed via http://review.gluster.org/#/c/12479/. It should make it to glusterfs-3.7.7
12:45 glusterbot Title: Gerrit Code Review (at review.gluster.org)
12:45 itisravi Just a heads-up.
12:46 matclayton ouch, when is 3.7.7 expected?
12:46 itisravi This month I think.
12:46 matclayton or does this only impact failure cases during setup?
12:48 itisravi It happens when you perform a `gluster volume set` operation, when glusterd restarts, etc.
12:49 itisravi If you would rather not wait for the release, you could compile the 3.7 branch from source.
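itisravi's build-from-source route can be sketched roughly as follows. The repo URL, branch name, and configure flags here are illustrative assumptions; check the GlusterFS developer docs for your distro's exact prerequisites.

```shell
# Hedged sketch: build the 3.7 release branch from source.
git clone https://github.com/gluster/glusterfs.git
cd glusterfs
git checkout release-3.7
./autogen.sh
# Prefix/sysconfdir shown here match typical packaged layouts; adjust as needed.
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make -j"$(nproc)"
sudo make install
```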
12:49 skoduri_ joined #gluster
12:51 kotreshhr left #gluster
12:53 arcolife joined #gluster
12:55 EinstCra_ joined #gluster
13:01 rofox joined #gluster
13:01 rofox Hello
13:01 rajeshj joined #gluster
13:02 rofox first time here.
13:03 rofox I have a question about bumping the op-version in /var/lib/glusterd/vols/{volname}/info
13:04 rofox I upgraded from 3.4 to 3.6
13:04 ira joined #gluster
13:04 matclayton itisravi: so if I run volume set now on 3.7.6 and we then manually restart it, data will go into the arbiter?
13:04 rofox I see op version changed from 2 to 30600
13:05 matclayton itisravi: does that happen in normal operation and not a server crash?
13:05 rofox in my info it is still op-version 2
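rofox's op-version check can be sketched like this. The `cluster.op-version` bump via `volume set all` is the usual route on 3.6+, but treat these commands as a sketch and verify against the upgrade notes for your release; `30600` is the 3.6.0 op-version level.

```shell
# Cluster-wide operating version as glusterd sees it:
grep operating-version /var/lib/glusterd/glusterd.info
# Per-volume value (the "op-version 2" rofox mentioned lives here):
grep op-version /var/lib/glusterd/vols/*/info
# Bump the cluster op-version to the 3.6.0 level:
gluster volume set all cluster.op-version 30600
```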
13:05 susant left #gluster
13:05 itisravi matclayton: yes I'm afraid it would :(
13:06 itisravi yes.
13:06 spalai joined #gluster
13:06 matclayton ouch…..
13:08 vmallika joined #gluster
13:08 side_control joined #gluster
13:13 arcolife joined #gluster
13:14 matclayton itisravi: what about rolling out with RF2 and adding an arbiter later, would you advise this?
13:17 itisravi matclayton: That could be done. But adding a 3rd brick as arbiter is not yet supported. I'm planning to add it for 3.8
13:17 matclayton ah ok
13:17 matclayton itisravi: any more specific date for when 3.7.7 might land?
13:18 nangthang joined #gluster
13:18 matclayton we have an old cluster about to run out of space and a new one going online now, so we need to make a call on whether to wait for this or launch with RF2
13:18 Lee1092 joined #gluster
13:18 itisravi hagarth: ^ would you know?
13:21 poornimag joined #gluster
13:22 hagarth matclayton: aiming to get 3.7.7 out in the next 2 weeks
13:23 unclemarc joined #gluster
13:24 matclayton perfect :)
13:25 rafi joined #gluster
13:27 mobaer joined #gluster
13:30 zhangjn joined #gluster
13:30 d0nn1e joined #gluster
13:31 bluenemo joined #gluster
13:34 aravindavk joined #gluster
13:46 arcolife joined #gluster
13:51 plarsen joined #gluster
13:53 jrm16020 joined #gluster
13:57 zhangjn joined #gluster
13:58 zhangjn joined #gluster
13:58 kovshenin joined #gluster
14:00 zhangjn joined #gluster
14:01 zhangjn joined #gluster
14:01 EinstCrazy joined #gluster
14:02 shyam joined #gluster
14:06 julim joined #gluster
14:10 zhangjn joined #gluster
14:12 spalai joined #gluster
14:19 Manikandan joined #gluster
14:21 mlncn joined #gluster
14:24 hamiller joined #gluster
14:25 chirino joined #gluster
14:25 nbalacha joined #gluster
14:27 ir2ivps5 joined #gluster
14:36 jockek joined #gluster
14:39 Park joined #gluster
14:43 spalai joined #gluster
14:43 theron joined #gluster
14:44 zhangjn joined #gluster
14:52 arcolife joined #gluster
14:53 skylar joined #gluster
15:01 glafouille joined #gluster
15:06 spalai joined #gluster
15:06 julim joined #gluster
15:13 hagarth joined #gluster
15:23 shyam joined #gluster
15:23 matclayton joined #gluster
15:25 corretico joined #gluster
15:26 klaxa|work joined #gluster
15:29 ivan_rossi joined #gluster
15:30 amye1 joined #gluster
15:32 julim joined #gluster
15:34 bennyturns joined #gluster
15:37 ayma joined #gluster
15:45 ayma joined #gluster
15:46 dlambrig_ joined #gluster
15:50 spalai joined #gluster
15:51 kovshenin joined #gluster
15:51 maserati joined #gluster
15:51 haomaiwang joined #gluster
15:55 coredump joined #gluster
16:01 7JTAA99X4 joined #gluster
16:01 bowhunter joined #gluster
16:02 haomaiwang joined #gluster
16:03 haomaiwang joined #gluster
16:03 julim joined #gluster
16:04 haomaiwa_ joined #gluster
16:05 haomaiwang joined #gluster
16:06 haomaiwa_ joined #gluster
16:07 haomaiwa_ joined #gluster
16:08 haomaiwang joined #gluster
16:09 haomaiwang joined #gluster
16:10 18VAAEQ5T joined #gluster
16:11 haomaiwa_ joined #gluster
16:12 haomaiwang joined #gluster
16:13 theron joined #gluster
16:13 18VAAEQ8F joined #gluster
16:17 jobewan joined #gluster
16:18 dblack joined #gluster
16:26 squizzi_ joined #gluster
16:26 plarsen joined #gluster
16:29 josh__ joined #gluster
16:38 chirino joined #gluster
16:45 sghatty_ All: does anyone know if gluster fs supports directory level snapshots?
16:49 sghatty_ skoduri, csaba, kkeithley, all: Quick question : Does glusterfs support directory level snapshots? any help is appreciated!
16:50 shyam sghatty_: No, it supports volume level snaps only (based on LVM snaps of the bricks)
16:51 sghatty_ thank you, shyam!
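shyam's answer (volume-level snapshots only, backed by LVM snapshots of the bricks) corresponds to the snapshot CLI sketched below. Snapshot and volume names are illustrative; note the feature requires bricks on thinly provisioned LVM.

```shell
# Volume-level snapshot sketch (glusterfs 3.6+); "snap1"/"myvol" are placeholders.
gluster snapshot create snap1 myvol
gluster snapshot list myvol
# Restoring requires the volume to be stopped first:
gluster snapshot restore snap1
```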
16:54 EinstCrazy joined #gluster
16:58 sghatty_ shyam, all: another question: Is there an upper limit on the number of gluster nodes / peers that can be part of a trusted pool?
17:01 sghatty_ Looks like there is support for file level snapshots: http://www.gluster.org/community/documentation/index.php/Features/File_Snapshot.
17:01 sghatty_ I wonder how this works.
17:02 bennyturns joined #gluster
17:02 sghatty_ Also, there is a bug open for directory level snapshots. has this ever been implemented? http://www.gluster.org/pipermail/bugs/2015-May/022442.html
17:03 glusterbot Title: [Bugs] [Bug 1226210] New: [FEAT] directory level snapshot clone (at www.gluster.org)
17:11 calavera joined #gluster
17:12 atalur joined #gluster
17:16 inhumantsar joined #gluster
17:17 inhumantsar g'd morning to those of you in north america
17:17 inhumantsar (or south america)
17:18 theron joined #gluster
17:19 skoduri joined #gluster
17:20 inhumantsar quick question about architecting my application using gluster... it's looking like a good choice for large file storage/retrieval. however, part of my workflow seems like it might be abusive
17:22 inhumantsar locA/file[1...n] (0.5-2GB) gets copied to locB/file[1...n], which is then zipped. each copy/zip operation gets its own thread
17:24 inhumantsar this can happen on multiple workers simultaneously, though they all do their own copy/zip ops according to their own schedules, so the workload should be spread somewhat evenly throughout the day
17:25 inhumantsar is this something gluster should handle without too much trouble?
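inhumantsar's per-thread copy-then-zip workflow could be sketched as below. All paths, names, and the worker count are hypothetical stand-ins for the locA/locB layout described above; on a gluster mount the copy step is plain POSIX I/O, so nothing gluster-specific is needed.

```python
import shutil
import zipfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def copy_and_zip(src: Path, work_dir: Path) -> Path:
    """Copy one large file into work_dir (locA -> locB), then zip it."""
    staged = work_dir / src.name
    shutil.copy2(src, staged)
    archive = staged.with_suffix(".zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(staged, arcname=staged.name)
    return archive

def process_all(sources, work_dir, max_workers=4):
    """One thread per copy/zip operation, as described in the workflow."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda s: copy_and_zip(s, work_dir), sources))
```

With 0.5-2 GB files the workload is sequential reads and writes, which gluster generally handles well; the thread pool just bounds how many copies hit the mount at once.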
17:27 chirino joined #gluster
17:29 theron joined #gluster
17:34 Humble joined #gluster
17:41 nathwill joined #gluster
17:55 ayma joined #gluster
18:02 bennyturns joined #gluster
18:06 ivan_rossi left #gluster
18:12 skoduri joined #gluster
18:12 PatNarciso ... there is something elementary I'm missing about the search on review.gluster.org... how do I search by keyword?  each query I perform results in an error...
18:12 PatNarciso howz do I usez da internets?
18:13 ndevos PatNarciso: try "message:keyword"
18:13 PatNarciso perfect-- thats the detail I needed.  thanks!
18:13 glusterbot PatNarciso: perfect's karma is now -1
18:13 ndevos PatNarciso: and combine that with "status:open" or "status:closed", I think it defaults to only show patches under review, not merged yet
18:14 rafi joined #gluster
18:15 PatNarciso thanks.  bookmarking "status:open message:tier".
18:23 mlncn joined #gluster
18:24 Rapture joined #gluster
18:28 kovshenin joined #gluster
18:33 kovshenin joined #gluster
18:40 F2Knight joined #gluster
19:11 hagarth_ joined #gluster
19:13 theron joined #gluster
19:16 hagarth_ left #gluster
19:21 mlncn joined #gluster
19:26 theron joined #gluster
19:35 arcolife joined #gluster
19:47 kovshenin joined #gluster
19:48 purpleidea joined #gluster
19:48 purpleidea joined #gluster
19:52 Bardack joined #gluster
19:54 josh__ i have a gluster volume that keeps throwing io errors.  the vms on that volume keep pausing.  any ideas where to start looking? i see this error in the volume log "[fuse-bridge.c:1282:fuse_err_cbk] 0-glusterfs-fuse: 11118: FSYNC() ERR => -1 (Input/output error)"
20:00 josh__ it is a distributed volume with a single brick and that brick is up
20:18 shyam josh__: looks like you are facing what is being fixed here: http://review.gluster.org/#/c/12594/ you could try disabling write-behind and checking if that fixes things up.
20:18 glusterbot Title: Gerrit Code Review (at review.gluster.org)
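shyam's workaround can be sketched as a single `volume set` call; the volume name is a placeholder, and `performance.write-behind` is the translator the linked patch concerns.

```shell
# Disable write-behind on the affected volume, then re-check for FSYNC errors.
gluster volume set myvol performance.write-behind off
# Re-enable once a fixed release is in place:
gluster volume set myvol performance.write-behind on
```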
20:19 mhulsman joined #gluster
20:26 skylar joined #gluster
20:48 theron joined #gluster
20:48 adama joined #gluster
20:55 EinstCrazy joined #gluster
20:56 LeviDeHaan joined #gluster
20:58 LeviDeHaan Hi, has anyone here set up a gluster + xen configuration and had success with it? I've read conflicting articles. There is even a xen image on the gluster site, but that seems to enable gluster between VMs rather than between the VM hosts underlying them.
20:59 josh__ shyam: thank you
20:59 josh__ LeviDeHaan: I am using glusterfs as storage for an oVirt setup, which has worked OK
21:01 LeviDeHaan so you are using glusterfs as NFS storage on xen for your VM images?
21:01 LeviDeHaan and all storage for the VMs is allocated via your glusterfs instance, not running on the VMs themselves as shared storage between running virtual machines?
21:02 LeviDeHaan I just want to be clear; I read a thread on the xen forums that bummed me out: a guy said it was unreliable in any configuration he'd tried, and he went back to NFS with its SPOF issues
21:02 lpabon joined #gluster
21:07 josh__ i don't use xen.  i use ovirt.
21:08 calavera joined #gluster
21:10 LeviDeHaan gotcha
21:11 mhulsman joined #gluster
21:19 cjellick joined #gluster
21:43 tessier joined #gluster
21:46 diegows joined #gluster
21:53 zoldar joined #gluster
22:23 jwang joined #gluster
22:25 JoeJulian LeviDeHaan: Unfortunately, xen does not have libgfapi support, nor does it support non-native filesystems for domU images. Some people have used iscsi for presenting gluster storage to xen: http://blog.gluster.org/2013/11/a-gluster-block-interface-performance-and-configuration/
22:36 JesperA joined #gluster
22:43 gildub joined #gluster
22:46 wehde joined #gluster
22:47 wehde does anyone know if you have to enable granular locking as an option, or is it the default?
22:47 JoeJulian It's the default.
22:47 wehde any idea why it locks a VM and causes it to freeze when self-healing?
22:49 wehde I have 3 nodes. Powered one down to change some BIOS settings, then powered it up, and the gluster volume is undergoing self-heal
22:49 wehde now my KVM VMs are frozen
22:50 coredump joined #gluster
22:50 wehde gluster version 3.7.5
22:51 JoeJulian Guesses would be resource starvation, something blocking or shutting down the network before the bricks close the tcp connections causing a ping-timeout delay...
22:51 JoeJulian Check your logs.
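JoeJulian's ping-timeout hypothesis can be checked roughly as follows. The volume name and log paths are illustrative; `network.ping-timeout` is the relevant tunable (42 seconds by default), and `volume get` is available on the 3.7 series wehde is running.

```shell
# Current ping-timeout for the volume:
gluster volume get myvol network.ping-timeout
# Look for ping-timeout expiries and disconnects in the client/brick logs:
grep -iE "ping.timeout|disconnect" /var/log/glusterfs/*.log
```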
22:52 wehde all nodes are able to reach each other and the gluster volume shows them all up
22:53 wehde will check logs tomorrow
22:53 wehde nothing getting done tonight
23:01 jrm16020 joined #gluster
23:01 theron joined #gluster
23:05 wushudoin joined #gluster
23:08 mrEriksson joined #gluster
23:10 mlncn joined #gluster
23:13 mlncn joined #gluster
23:25 calavera joined #gluster
23:26 delhage joined #gluster
23:41 delhage joined #gluster
23:45 marlinc joined #gluster
23:49 n0b0dyh3r3 joined #gluster
23:53 frozengeek joined #gluster
23:56 dlambrig joined #gluster
23:59 cjellick joined #gluster