
IRC log for #gluster, 2013-05-30


All times shown according to UTC.

Time Nick Message
00:04 _pol joined #gluster
00:06 vpshastry left #gluster
00:18 yinyin joined #gluster
00:28 eryc joined #gluster
00:29 hjmangalam1 joined #gluster
00:38 kevein joined #gluster
01:00 majeff joined #gluster
01:13 vpshastry joined #gluster
01:15 flrichar joined #gluster
01:21 __Bryan__ joined #gluster
01:24 bala joined #gluster
01:34 mohankumar__ joined #gluster
01:45 wushudoin joined #gluster
01:51 vpshastry joined #gluster
02:12 yongtaof joined #gluster
02:14 yongtaof When I run a glusterfs rebalance, progress on one of the servers is really slow, and its disk is almost full
02:15 yongtaof what's the best way to reduce the disk usage of this brick?
02:15 yongtaof Any suggestions?
02:17 yongtaof Why does the rebalance seem to block on that brick?
02:20 _pol joined #gluster
02:23 yongtaof01 joined #gluster
02:26 yongtaof anybody able to help?
02:29 thomaslee joined #gluster
02:41 portante joined #gluster
02:49 lpabon joined #gluster
03:09 anands joined #gluster
03:18 kshlm joined #gluster
03:19 majeff joined #gluster
03:30 bharata joined #gluster
03:44 hjmangalam1 joined #gluster
03:51 saurabh joined #gluster
03:53 mohankumar__ joined #gluster
03:53 mooperd joined #gluster
03:54 sgowda joined #gluster
04:09 yongtaof looks like a bug in __dht_check_free_space
04:23 soukihei joined #gluster
04:24 shylesh joined #gluster
04:28 hchiramm_ joined #gluster
04:40 thomaslee joined #gluster
04:52 thomasl__ joined #gluster
04:52 aravindavk joined #gluster
04:57 Gues_____ joined #gluster
04:59 rastar joined #gluster
05:01 vpshastry1 joined #gluster
05:03 JoeJulian yongtaof: I haven't seen rebalance block, unless a replica needs healed. What version?
05:13 krishna joined #gluster
05:13 satheesh joined #gluster
05:32 Cenbe joined #gluster
05:32 bulde joined #gluster
05:33 satheesh joined #gluster
05:44 hjmangalam1 joined #gluster
05:51 krishna joined #gluster
05:56 an joined #gluster
05:56 lalatenduM joined #gluster
05:57 glusterbot New news from resolvedglusterbugs: [Bug 967031] GlusterFS client crash with sig 11 and core dump after 10-15 minutes <http://goo.gl/tg7AS>
05:59 JoeJulian @yum
05:59 glusterbot JoeJulian: The official glusterfs packages for RHEL/CentOS/SL are available here: http://goo.gl/s077x
05:59 JoeJulian @reconnect
05:59 glusterbot joined #gluster
06:00 rastar joined #gluster
06:13 vshankar joined #gluster
06:19 wgao what's wrong here? it prints 'May 30 12:54:18 hosth glusterfsd[4828]: [2013-05-30 04:54:18.405604] C [glusterfsd.c:1374:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed (No such file or directory)'
06:20 Cenbe joined #gluster
06:21 JoeJulian wgao: how do you get there?
06:21 wgao service glusterfsd status
06:23 JoeJulian what version?
06:23 wgao service glusterd status prints 'May 30 12:54:17 hosth GlusterFS[4802]: [2013-05-30 04:54:17.060642] C [rdma.c:4099:gf_rdma_init] 0-rpc-transport/rdma: Failed to get IB devices'
06:24 wgao 3.4.0beta2
06:24 JoeJulian You don't start glusterfsd, just glusterd.
06:25 JoeJulian The glusterfsd init script is for shutting down gracefully, or for legacy hand-written vol files.
06:25 wgao yes, glusterfsd can NOT be started
06:25 JoeJulian That's what I'm telling you.
06:26 wgao glusterd works, but with an issue.
06:26 bala1 joined #gluster
06:27 JoeJulian If that critical error about IB devices is the error, I would file a bug report because that's not a critical failure. It simply means that you have no infiniband devices.
06:27 glusterbot http://goo.gl/UUuCq
06:27 wgao Ohh, it can NOT work on this version ?
06:28 JoeJulian it CAN. But you would have to know how. If you do not know how, then you really would have no need or desire.
06:28 wgao what are those devices?
06:28 JoeJulian @lucky infiniband
06:29 glusterbot JoeJulian: http://en.wikipedia.org/wiki/InfiniBand
06:30 wgao Thanks, but I want to know how glusterfsd worked ?
06:31 JoeJulian Not really
06:32 JoeJulian The best way to learn how it worked would be to use the command line interface, gluster, and create volumes. Once created, look in /var/lib/glusterd/vols and examine the .vol files.
06:33 JoeJulian You should be able to understand generally how glusterfsd works from those. The beauty of the cli is that it takes away the need to configure volumes by creating those .vol files and also allows dynamic changes to the volume. Dynamic changes are not possible with hand-written vol files under glusterfsd.
06:36 wgao Ohh, that's a little bit confusing for me, thank you, I will try.
06:37 JoeJulian Good luck.
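A minimal sketch of what JoeJulian describes above, assuming two already-probed peers named server1 and server2 and brick directories under /bricks (all placeholder names):

    gluster peer probe server2                # run once, from server1
    gluster volume create testvol replica 2 server1:/bricks/test server2:/bricks/test
    gluster volume start testvol
    # then read the generated volfiles to see how glusterfsd is wired together
    ls /var/lib/glusterd/vols/testvol/
    less /var/lib/glusterd/vols/testvol/*.vol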
06:38 ollivera joined #gluster
06:40 StarBeast joined #gluster
06:40 kevein joined #gluster
06:40 snarkyboojum joined #gluster
06:41 JoeJulian Supermathie: Dude! It's totally that bug I referenced, not his firewall. :P
06:42 krishna joined #gluster
06:44 rgustafs joined #gluster
06:44 vimal joined #gluster
06:48 guigui1 joined #gluster
06:49 ekuric joined #gluster
06:50 majeff joined #gluster
06:53 ricky-ticky joined #gluster
07:00 bulde joined #gluster
07:02 krishna joined #gluster
07:02 ctria joined #gluster
07:02 an joined #gluster
07:17 hybrid512 joined #gluster
07:39 dmojorydger joined #gluster
07:47 majeff1 joined #gluster
08:09 ujjain joined #gluster
08:10 hagarth joined #gluster
08:11 dobber_ joined #gluster
08:17 wgao joined #gluster
08:18 manik joined #gluster
08:23 JoeJulian kkeithley: You're up late/early...
08:26 kevein joined #gluster
08:29 yongtaof01 joined #gluster
08:29 rb2k joined #gluster
08:32 hchiramm_ joined #gluster
08:38 ramkrsna joined #gluster
08:38 ramkrsna joined #gluster
08:39 krishna joined #gluster
08:40 majeff joined #gluster
08:40 hchiramm_ joined #gluster
08:46 hchiramm__ joined #gluster
08:47 duerF joined #gluster
08:49 Airbear joined #gluster
08:49 lh joined #gluster
08:49 lh joined #gluster
08:56 andreask joined #gluster
09:18 kshlm joined #gluster
09:23 hagarth @channelstats
09:23 glusterbot hagarth: On #gluster there have been 134440 messages, containing 5765057 characters, 967792 words, 3979 smileys, and 499 frowns; 839 of those messages were ACTIONs. There have been 49716 joins, 1571 parts, 48181 quits, 19 kicks, 141 mode changes, and 5 topic changes. There are currently 202 users and the channel has peaked at 217 users.
09:23 hagarth JoeJulian: kkeithley is in a different TZ :).
09:25 kelkoobenoitr left #gluster
09:29 rastar1 joined #gluster
09:44 tziOm joined #gluster
09:49 d3O joined #gluster
09:52 d3O left #gluster
10:00 glusterbot New news from newglusterbugs: [Bug 952029] Allow an auxiliary mount which lets users access files using only gfids <http://goo.gl/x5z1R>
10:02 wgao how do I use 'vdsClient 0 glusterVolumeCreate' to create a volume?
10:17 rastar joined #gluster
10:21 nightwalk joined #gluster
10:24 spider_fingers joined #gluster
10:39 edward1 joined #gluster
10:40 piotrektt_ joined #gluster
10:40 vpshastry joined #gluster
10:46 krishna joined #gluster
10:49 krishna left #gluster
10:50 vpshastry joined #gluster
10:50 andreask joined #gluster
10:51 puebele joined #gluster
11:07 Staples84 joined #gluster
11:07 mooperd joined #gluster
11:30 vpshastry joined #gluster
11:39 ska left #gluster
11:44 vpshastry joined #gluster
11:52 kke joined #gluster
11:53 kke getting No space left on device even though there is plenty of free space on every brick
11:55 kke [2013-05-30 14:54:54.850756] I [glusterd-handler.c:796:glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
11:55 kke [2013-05-30 14:54:54.852335] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:992)
11:55 glusterbot kke: That's just a spurious message which can be safely ignored.
11:55 kke great
11:56 kke [2013-05-30 14:50:29.538674] W [client3_1-fops.c:5158:client3_1_readdirp] 0-m1_data-client-10: (98400): failed to get fd ctx. EBADFD
11:56 kke [2013-05-30 14:50:29.538683] W [client3_1-fops.c:5222:client3_1_readdirp] 0-m1_data-client-10: failed to send the fop: File descriptor in bad state
11:57 kke [2013-05-30 14:19:00.911536] W [fuse-bridge.c:1567:fuse_create_cbk] 0-glusterfs-fuse: 4768387: /attachments/2011/2/26/3/2/3239842e-17f8-42b6-9786-c8b97c6d2280/.1972_i74MTE.pdf.gfs15650 => -1 (No space left on device)
11:57 kke [2013-05-30 14:19:01.110779] I [client3_1-fops.c:1722:client3_1_create_cbk] 0-m1_data-client-3: remote operation failed: No space left on device
11:59 kke [2013-05-30 14:33:45.325860] E [posix.c:1447:posix_mkdir] 0-m1_data-posix: mkdir of /attachments/2013/5/30/8/0/806e4d31-8570-40b9-a173-f6b36135e3f8 failed: No space left on device
11:59 kke [2013-05-30 14:33:45.325912] I [server3_1-fops.c:488:server_mkdir_cbk] 0-m1_data-server: 44635: MKDIR /attachments/2013/5/30/8/0/806e4d31-8570-40b9-a173-f6b36135e3f8  ==> -1 (No space left on device)
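ENOSPC on a volume that still has overall free space usually means one particular brick (or its inode table) is full; a hedged way to check, with placeholder brick paths for the m1_data volume seen in the logs above:

    # on each server, check both space and inodes of the brick filesystem
    df -h /bricks/m1_data
    df -i /bricks/m1_data
    # or, from any peer, per-brick capacity in one command (gluster 3.3+)
    gluster volume status m1_data detail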
12:03 glusterbot New news from newglusterbugs: [Bug 963223] Re-inserting a server in a v3.3.2qa2 distributed-replicate volume DOSes the volume <http://goo.gl/LqgL8>
12:03 d3O_ joined #gluster
12:05 hchiramm_ joined #gluster
12:25 manik joined #gluster
12:25 pkoro joined #gluster
12:31 pkoro Hi people, we had a really peculiar issue yesterday and I wanted to ask here if anyone has noticed it before. The issue: when adding a new brick to a volume on 2 of our servers in the gluster ring, we were receiving the following error: /path/to/newly/created/brick or a prefix of it is already part of a volume . We found out after some time that the xattr trusted.gfid had been set on / . We removed it and everything worked out afterwards
12:31 glusterbot pkoro: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
12:32 pkoro Has anyone seen something similar before? I am just asking because there may be a bug somewhere within gluster
12:33 pkoro (i don't think anything other than gluster would set that xattr anyway)
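The fix glusterbot links to amounts to removing the stray gluster xattrs from the offending path before retrying the add; a rough sketch for the case pkoro describes (paths are placeholders, and only trusted.gfid was involved here):

    # inspect what is actually set
    getfattr -m . -d -e hex /
    getfattr -m . -d -e hex /path/to/newly/created/brick
    # remove the stray attribute found on /
    setfattr -x trusted.gfid /
    # a reused brick directory would typically also need:
    #   setfattr -x trusted.glusterfs.volume-id /path/to/brick
    #   rm -rf /path/to/brick/.glusterfs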
12:43 Uzix joined #gluster
12:47 hchiramm_ joined #gluster
12:49 inevity joined #gluster
12:49 baul joined #gluster
12:52 hchiramm__ joined #gluster
12:53 JoeJulian pkoro: Nope, gluster's the only one that would make that xattr. I've never heard of it putting that tag on any directory that wasn't specified in the create command.
12:57 Norky it looks like you (or someone) either referenced the root FS / in a previous create command, or rsync'ed (with xattrs) from a brick to the root FS
12:58 mohankumar__ joined #gluster
12:58 JoeJulian Or maliciously setfattr in a drunken stupor to try to teach yourself a lesson!
13:02 Norky devious drunkenself!
13:03 spider_fingers omg)
13:04 robo joined #gluster
13:06 baul joined #gluster
13:06 inevity joined #gluster
13:08 pkoro Well that was our first thought too (some admin made a mistake) but having it on 2 servers (out of a total of 5) puzzled me.
13:10 inevity2 joined #gluster
13:11 js_ anybody here compared glusterfs performance between cloud hosts like linode and aws?
13:14 baul joined #gluster
13:15 codo joined #gluster
13:18 bennyturns joined #gluster
13:18 JoeJulian I think jdarcy did that...
13:18 bala joined #gluster
13:23 JoeJulian http://hekafs.org/index.php/2013/05/performance-variation-in-the-cloud/
13:23 glusterbot <http://goo.gl/0K8wX> (at hekafs.org)
13:25 an joined #gluster
13:28 dewey joined #gluster
13:29 inevity2 for small files in gluster, which load is bigger? lookup or attr?
13:32 shylesh joined #gluster
13:37 H__ dammit, every management action i try (replace-brick, rebalance, bring back a replica half) takes down the entire volume :(
13:38 bleon joined #gluster
13:50 bugs_ joined #gluster
13:50 plarsen joined #gluster
13:54 manik joined #gluster
13:54 majeff joined #gluster
14:00 Supermathie JoeJulian: lol, gotcha. :) (actually I didn't mean to send it to the list. And while hanging out on IRC, I'd see people asking about that CONSTANTLY.)
14:01 wN joined #gluster
14:01 majeff joined #gluster
14:03 spider_fingers left #gluster
14:04 hchiramm_ joined #gluster
14:05 JoeJulian inevity2: lookup
14:06 JoeJulian H__: 3.3.1?
14:07 inevity2 who trigger lookup beside open call?
14:07 JoeJulian stat
14:07 JoeJulian Anything that needs to read from the inode.
14:09 inevity2 i looked at the syscalls and only found open triggering a gluster lookup. but in gluster something else also triggers lookup
14:10 inevity2 such as self-heal
14:11 wushudoin joined #gluster
14:11 JoeJulian http://www.tldp.org/LDP/khg/HyperNews/get/fs/vfstour.html gives a detailed description of what the lookup() function is and how it's used.
14:11 glusterbot <http://goo.gl/T4Y3h> (at www.tldp.org)
14:14 __Bryan__ joined #gluster
14:15 portante joined #gluster
14:16 H__ JoeJulian: 3.3git-v3.3.2qa2-3-g3490689
14:23 inevity2 just to find the inode by name. among syscalls i only know of the open syscall calling lookup, do you know of others?
14:25 JoeJulian H__: I encountered that recently with 3.3.1. I didn't have time to try to isolate a cause. I'm trying to remember how I cured it...
14:26 JoeJulian I'm pretty sure I stopped and started the volume...
14:27 kaptk2 joined #gluster
14:32 lpabon joined #gluster
14:36 H__ JoeJulian: i cannot do that I'm afraid. It's running production.
14:38 JoeJulian H__: When it does that, is the only way to get them to start working again to kill all the glusterd?
14:39 hjmangalam1 joined #gluster
14:40 piotrektt joined #gluster
14:47 plarsen joined #gluster
14:50 H__ JoeJulian: I have to stop the rebalance, or stop the replace-brick, or bring the replica-half server down to let the volume recover
14:50 JoeJulian hmm, not the same thing then.
14:55 daMaestro joined #gluster
14:59 piotrektt joined #gluster
15:07 kke is there any way to speed up rebalancing?
15:07 duerF anyone know why I am unable to do a --move mount into a glusterfs mount? "mount --move /mount/point /gluster-filessystem/directory", I get Invalid argument EINVAL, strace output "mount("/root/kreb", "/frica-vol-1/kreb", 0x7f4e9321667b, MS_MGC_VAL|MS_MOVE, NULL) = -1 EINVAL (Invalid argument)"
15:13 duerF for some context, I'm trying to run root (/) off a glusterfs mount, stacked, let's call it boot from glusterfs. That is I run glusterfs nfs service <- nfs mount from localhost in dracut
15:15 jthorne joined #gluster
15:16 piotrektt_ joined #gluster
15:18 andreask joined #gluster
15:28 manik joined #gluster
15:30 18WADLYMM joined #gluster
15:35 samppah @latest
15:35 glusterbot samppah: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
15:37 samppah @ppa
15:37 glusterbot samppah: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY -- and 3.4 packages are here: http://goo.gl/u33hy
15:39 saurabh joined #gluster
15:41 piotrektt_ joined #gluster
15:42 hchiramm_ joined #gluster
15:46 awheeler_ Are there updated 3.3/3.4 docs?  I can only find the 3.2 docs: http://gluster.org/community/documentation/index.php/Gluster_3.2_Filesystem_Administration_Guide
15:46 glusterbot <http://goo.gl/RkDhU> (at gluster.org)
15:48 awheeler_ nm, I re-found it: http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
15:48 glusterbot <http://goo.gl/bzF5B> (at www.gluster.org)
15:49 zaitcev joined #gluster
15:58 aliguori joined #gluster
16:02 inevity joined #gluster
16:03 ni291187 joined #gluster
16:03 ni291187 left #gluster
16:07 piotrektt__ joined #gluster
16:08 root__ joined #gluster
16:10 RangerRick2 joined #gluster
16:11 devoid joined #gluster
16:11 root__ joined #gluster
16:12 RangerRick2 Hi - we are experiencing very slow write throughput and IOPS using Gluster with only two nodes (we have SSD under the filesystems, so there's plenty of local IOPS available).  Are there tuning or other configuration options/best practices we should use to address the write performance issue?
16:17 lyang0 joined #gluster
16:26 _pol joined #gluster
16:26 kke left #gluster
16:26 piotrektt joined #gluster
16:27 _pol joined #gluster
16:27 hjmangalam1 joined #gluster
16:29 mohankumar__ joined #gluster
16:30 semiosis RangerRick2: two nodes... which are servers & clients?
16:31 semiosis what kind of network connects them?
16:33 RangerRick2 We have two nodes - they are both client and server.  We're testing via NFS server and iSCSI targets on the nodes from VMware as the client via 1 GBE.  Write throughput is 6 MB/sec up to 20 MB/sec vs. read throughput up to 110 MB/sec.
16:35 an_ joined #gluster
16:39 RangerRick2 When we go direct to the node filesystem, bypassing Gluster client/server layers, we see around 50 MB/sec throughput, so the performance penalty appears to be 30 MB to 44 MB/sec.  I suppose 50% overhead would be understandable, but we're seeing a lot more overhead.
16:40 RangerRick2 Does the Gluster client wait for all node writes to complete synchronously before returning a write-complete back to the NFS server / iSCSI target server?
16:41 lbalbalba joined #gluster
16:43 semiosis i think so
16:46 Mo__ joined #gluster
16:47 RangerRick2 Hmmm. Seems like throughput would be much better if Gluster client returned to the caller immediately after the first write response and allow the other node writes/replication to complete in the background.  99.9999% of the time, writes to all nodes will complete okay, and in the 00.0001% of the time a node has an unrecoverable write error, the failed write to that node probably isn't recoverable anyway if retried i
16:48 bala joined #gluster
16:50 semiosis whats the latency between your nodes?
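A quick way to collect the numbers being asked about, assuming the volume is mounted at /mnt/gv0 and the peer is reachable as node2 (both placeholders):

    # round-trip latency between the two nodes
    ping -c 20 node2
    # sequential write through the gluster mount, flushed at the end
    dd if=/dev/zero of=/mnt/gv0/ddtest bs=1M count=1024 conv=fdatasync
    # same write directly on the brick filesystem, for comparison
    dd if=/dev/zero of=/bricks/gv0/ddtest bs=1M count=1024 conv=fdatasync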
16:52 36DAAPVYG joined #gluster
16:56 samppah joined #gluster
16:56 lbalbalba hi. im running into some problems with scripted bulk modifying of *.vol files.
16:57 lbalbalba when running the prove tests, i modified them so that, after creation of a volume, they: stop the volume; stop glusterfsd; sync to disk; modify the vol files to add 'option transport.socket.own-thread on'; restart glusterd; restart the volume...
16:57 lbalbalba and yet, somehow, the modifications in the vol files are not made.
16:57 lbalbalba could be a bug in my script, of course
16:57 lbalbalba but... perhaps the contents of the vol files are stored somewhere else as well ?
16:58 inevity2 joined #gluster
16:58 baul2 joined #gluster
16:59 mohankumar__ joined #gluster
16:59 lbalbalba after each 'volume create' statement in the test suite, i run this : http://fpaste.org/15551/33145136/
16:59 glusterbot Title: #15551 Fedora Project Pastebin (at fpaste.org)
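The fpaste link has since expired; a rough sketch of the kind of wrapper described at 16:57, with the volume name and paths as assumptions, and with the caveat raised later (around 21:00) that glusterd may regenerate these volfiles at any time:

    #!/bin/sh
    VOL=patchy                                # the test suite's usual volume name; an assumption here
    gluster --mode=script volume stop $VOL
    service glusterd stop
    sync
    for f in /var/lib/glusterd/vols/$VOL/*.vol; do
        grep -q 'transport.socket.own-thread' "$f" || \
            sed -i '/type protocol\/server/a option transport.socket.own-thread on' "$f"
    done
    service glusterd start
    gluster --mode=script volume start $VOL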
17:01 semiosis lbalbalba: i'd be surprised to see that working
17:01 semiosis lbalbalba: the usual advice for people who want to edit volfiles is to give up on glusterd & the gluster CLI
17:02 semiosis and that's really only done by developers afaik, not in produciton
17:05 lbalbalba semiosis: giving up on the cli i can understand, but how does one give up on glusterd ?
17:05 lbalbalba semiosis: you cannot set 'option transport.socket.own-thread on' through the cli, you have to edit the vol file.
17:06 lbalbalba semiosis: http://lists.nongnu.org/archive/html/gluster-devel/2013-05/msg00157.html
17:06 glusterbot <http://goo.gl/r2SXt> (at lists.nongnu.org)
17:06 semiosis in the days of 3.1 it was at least conceivable to run "old school" without glusterd.  since then so much has been added to it i can't imagine running without it
17:07 semiosis lbalbalba: well we need to ask jdarcy if he ran that without glusterd
17:07 semiosis my guess is he wasn't using glusterd
17:08 semiosis but we're not jdarcy, we're mere mortals :)
17:08 lbalbalba :)
17:08 lbalbalba semiosis: ok. but i still dont understand *why* it doesnt work. doesnt gluster* read the vol files on startup ?
17:10 lbalbalba semiosis: and write them on shutdown ? or something ?
17:12 hagarth joined #gluster
17:14 thomaslee joined #gluster
17:14 lbalbalba semiosis: or does 'type protocol/server' not get written to a volfile until some magic event occurs ?
17:15 semiosis idk, hopefully a dev is watching & can chime in
17:17 lbalbalba thanks. perhaps i should just make a post on the gluster-dev mailing list, then
17:28 vpshastry1 joined #gluster
17:39 lbalbalba here we go, hope this will help: http://lists.nongnu.org/archive/html/gluster-devel/2013-05/msg00271.html
17:39 glusterbot <http://goo.gl/Vd2T0> (at lists.nongnu.org)
17:42 erik49 joined #gluster
17:45 _pol joined #gluster
17:46 _pol joined #gluster
17:52 jthorne joined #gluster
17:55 andreask joined #gluster
17:56 edoceo If I get an "entry self-heal failed" on a directory, what's the fix? manually syncing files to bricks somehow (rsync) and then kill it, wipe attributes and try again?
17:57 edoceo That's what I've been doing, seems to mostly work....but am I shooting myself in the foot?
17:59 StarBeast joined #gluster
18:00 hagarth joined #gluster
18:06 lbalbalba edoceo: im by no means an expert, but .. "entry self-heal failed" sounds like it's time to restore the backups ?
18:07 edoceo drrur...what is backup? :p
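Before falling back to rsync or backups, a hedged sketch of the built-in 3.3 heal commands that are usually worth trying first (volume name is a placeholder):

    gluster volume heal gv0 info               # entries still pending heal
    gluster volume heal gv0 info heal-failed   # entries the self-heal daemon failed on
    gluster volume heal gv0 full               # force a full crawl and heal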
18:12 codo left #gluster
18:12 erik49 joined #gluster
18:18 inevity joined #gluster
18:18 baul joined #gluster
18:19 hagarth joined #gluster
18:21 hjmangalam joined #gluster
18:32 leaky joined #gluster
18:33 leaky hello all, does anyone know how gluster handles corruption in a replica configuration?
18:34 leaky i.e. replica two bricks, one does not write data correctly, how is this identified?
18:36 lbalbalba dunno. 'user level' bad data just gets replicated across the bricks ? just like with raid mirroring ?
18:38 semiosis leaky: see the ,,(extended attributes) article to understand how glusterfs replication works
18:38 glusterbot leaky: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
18:39 leaky thank you
18:40 semiosis yw
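A concrete example of the inspection glusterbot points to, run against a file's path on a brick rather than through the mount (paths and volume name are placeholders; output values are illustrative):

    getfattr -m . -d -e hex /bricks/gv0/somefile
    # trusted.afr.gv0-client-0=0x000000000000000000000000
    # trusted.afr.gv0-client-1=0x000000020000000000000000
    # trusted.gfid=0x1a2b3c4d5e6f708192a3b4c5d6e7f809

Per the linked article, a non-zero trusted.afr counter pointing at the other replica means that replica missed operations and will be repaired by self-heal; silent on-disk corruption that leaves these xattrs untouched is not detected this way.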
18:43 erik49 joined #gluster
18:59 tg2 joined #gluster
19:06 hagarth joined #gluster
19:36 andrewjsledge joined #gluster
19:45 marmoset joined #gluster
19:47 marmoset So I think what I'm looking for is either impossible, or my google-fu is very weak.  I have a volume with two 10TB bricks.  I have some 100TB bricks to replace them with.  Can I do that, and resize the volume?
19:47 marmoset I'd prefer to not have to create a new volume and remount everything
19:49 erik49 joined #gluster
19:50 semiosis marmoset: there is a replace-brick command, however when I need to expand brick capacity i just replace the underlying disk & let self-heal sync it up
19:50 robo joined #gluster
19:50 Uzix joined #gluster
19:51 semiosis imho replace-brick should only be used if the brick's server or path are changing.
19:51 marmoset this would be changing the server as well
19:51 kaptk2 joined #gluster
19:52 marmoset dev resources got used in prod, now have the prod hardware ;)
19:52 semiosis then replace-brick away
19:52 marmoset cool, so like replace one brick, then the other, and the larger space just shows up?
19:52 semiosis yep
19:52 marmoset awesome, thanks
19:53 semiosis yw
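A sketch of the replace-brick sequence being discussed, with placeholder volume, host, and brick names:

    gluster volume replace-brick gv0 olddev1:/bricks/gv0 newprod1:/bricks/gv0 start
    # poll until the migration reports complete
    gluster volume replace-brick gv0 olddev1:/bricks/gv0 newprod1:/bricks/gv0 status
    gluster volume replace-brick gv0 olddev1:/bricks/gv0 newprod1:/bricks/gv0 commit
    # repeat for the second brick; the larger capacity then shows up in df on the mount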
19:53 marmoset left #gluster
19:54 Uzix joined #gluster
19:55 jthorne joined #gluster
19:56 portante joined #gluster
19:57 jdarcy joined #gluster
19:58 jdarcy Boo.
20:06 edoceo Does a gluster Client keep a copy of the volume file? or does it just get written to the log on HUP?
20:08 edoceo Basically I need to, from the client, get a current copy of the volume file
20:10 jdarcy The client doesn't keep a copy of the volfile in the local FS, but there is a way to fetch it.
20:10 edoceo Yea?  I tried doing a HUP to the glusterfs PID but that didn't do it; it does spew the vol file when it starts however -
20:11 jdarcy gluster --remote-host=any_server system getspec my_volume
20:12 jdarcy IIRC, you need to have the server RPM installed, but you don't need to be a member of the cluster.
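For example (host, volume, and mount point are placeholders), the fetched volfile can be saved and even used to mount the client directly from it:

    gluster --remote-host=server1 system getspec myvol > /tmp/myvol-fuse.vol
    glusterfs -f /tmp/myvol-fuse.vol /mnt/myvol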
20:17 * jdarcy tries to figure out who broke that in the last couple of days.  :(
20:20 tylerflint joined #gluster
20:20 tylerflint is there a guide or reference for configuring gluster on ipv6?
20:21 erik49 joined #gluster
20:23 jdarcy Oh great, it's an op_version SNAFU.
20:29 tylerflint so check this out: https://gist.github.com/notxarb/07f5f52dd6556b53a5a3#file-glusterfs-log-L9
20:29 glusterbot <http://goo.gl/pxt8o> (at gist.github.com)
20:29 tylerflint yet: https://gist.github.com/notxarb/07f5f52dd6556b53a5a3#file-glusterfs-log-L39
20:29 glusterbot <http://goo.gl/CD13D> (at gist.github.com)
20:30 tylerflint is that an unsupported option?
20:31 tylerflint ah! there's no "socket" in the keypath
20:31 tylerflint or, I'm not supposed to put "socket" in the keypath
20:32 jdarcy Yes, it's just transport.address-family AFAICT.
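A sketch of the sort of stanza under discussion, here as it might appear in /etc/glusterfs/glusterd.vol (surrounding options vary by install and are placeholders; the key name, per jdarcy, is just transport.address-family):

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        option transport.address-family inet6
    end-volume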
20:37 y4m4 joined #gluster
20:40 tylerflint well the config worked, and it's listening on an ipv6 address :)
20:41 tylerflint but it failed to peer probe :(
20:41 tylerflint seen this? Probe returned with unknown errno 134
20:43 jdarcy Firewall problem, maybe?
20:43 jdarcy ISTRC there was also a problem with resolving IPv6 names, which might be related.
20:44 tylerflint so where does the errno come from if it's unknown to gluster?
20:45 jbrooks joined #gluster
20:48 jdarcy 134 is ENOTCONN somewhere, according to comments, but not in my errno.h
20:51 Airbear joined #gluster
20:55 tylerflint jdarcy: you were right -> DNS resolution failed on host 2002:c0a8:da3::a
20:56 tylerflint is this a known issue?
20:57 jdarcy Might be http://review.gluster.org/#/c/4750/ - sad to say, I'm the one blocking that.
20:57 glusterbot Title: Gerrit Code Review (at review.gluster.org)
20:58 semiosis jdarcy: lbalbalba was asking about this earlier... http://lists.nongnu.org/archive/html/gluster-devel/2013-05/msg00157.html
20:58 glusterbot <http://goo.gl/r2SXt> (at lists.nongnu.org)
20:58 jdarcy Woo, fixed the getspec problem.
20:59 semiosis jdarcy: did you have to give up glusterd & dynamic client mounting to use that option?
20:59 jdarcy semiosis: Which option?
20:59 semiosis transport.socket.own-thread on
21:00 jdarcy semiosis: Well, it's not settable separately from the CLI, so it's likely to get overwritten when the volfiles are (e.g. volume set).
21:00 jdarcy semiosis: However, it's set automatically when client.ssl or server.ssl are, and those *are* CLI-compatible.
21:01 semiosis lbalbalba was trying to get glusterd to just read it from a volfile he edited on disk & serve it to a client
21:01 jdarcy semiosis: Risky proposition.  Nobody can really predict when the volfiles will be regenerated.
21:01 semiosis right
21:02 semiosis from what he was saying, sounded like glusterd was filtering what it read from the volfile, not sending all options to the clients
21:02 semiosis seems plausible
21:03 jdarcy semiosis: However, the script to add it could be invoked via /usr/lib*/glusterfs/$VERSION/filter
21:04 semiosis interesting
21:04 jdarcy semiosis: Scripts there should be invoked any time a volfile is rewritten, with the path to the volfile as an argument.
21:04 semiosis any idea when that was introduced?
21:05 jdarcy semiosis: Ages ago, for HekaFS.
21:05 semiosis cool
21:06 semiosis i'll poke around and see if it's in 3.3.2 or 3.4
21:07 jdarcy 86ed5d68596e577b4d923750a619a6921f2cfe18 from October 3, 2011.
21:07 jdarcy It's in 3.3 and 3.4
21:08 semiosis thanks!
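A hedged sketch of the filter hook jdarcy describes: an executable dropped into /usr/lib*/glusterfs/$VERSION/filter/ that glusterd calls with the path of each freshly rewritten volfile as $1 (the file name and the choice to patch both protocol/server and protocol/client are assumptions):

    #!/bin/sh
    # e.g. /usr/lib64/glusterfs/3.3.1/filter/own-thread (hypothetical path)
    VOLFILE="$1"
    grep -q 'transport.socket.own-thread' "$VOLFILE" && exit 0
    sed -i -e '/type protocol\/server/a option transport.socket.own-thread on' \
           -e '/type protocol\/client/a option transport.socket.own-thread on' "$VOLFILE"

Unlike editing volfiles by hand, this survives regeneration, since it re-applies every time glusterd rewrites them.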
21:09 kke joined #gluster
21:09 semiosis @later tell lbalbalba check this out: http://irclog.perlgeek.de/gluster/2013-05-30#i_7134549
21:09 glusterbot semiosis: The operation succeeded.
21:10 kke is there some way to estimate how much layout fixing is still left?
21:13 jdarcy kke: Not as far as I know.
21:14 kke any idea what the number represents?
21:22 jdarcy What number?
21:37 glusterbot New news from newglusterbugs: [Bug 969193] "gluster volume getspec" is broken (op_version problem) <http://goo.gl/lbysv>
21:50 kke jdarcy: rebalance step 1: layout fix in progress: fixed layout 471565
21:53 leaky left #gluster
22:00 jdarcy Hm.  Can't find that string in current source, but I'd guess it's the number of directories that have already been fixed.  Can't know how many are left without knowing how many there are total, which would involve crawling the entire filesystem.
22:03 kke which is not that big a deal, i bet that runs through in a hour max
22:03 kke also i know the number of files by the number of rows in a db
22:04 kke which can be converted to number of directories by multiplying by roughly 4
22:05 kke only about 350gb of data
22:06 kke actually number of files should be lower than number of directories
22:07 kke that didn't go right
22:07 kke actually number of files should be higher than number of directories
22:08 kke they're organized like /2013/05/31/1st_char_of_file_parent_id/2nd_char_of_parent_id/
22:08 kke and finally the /attachment_id
22:09 kke so actually the number of parent objects should roughly translate to number of directories
22:13 kke hmm, 500k done, 4.5 million to go? doesn't sound good, need to come up with some other solution i think.. maybe move off some old files to make more room on the older bricks
22:15 kke at about 100k/hour it should take 45 more hours and production is kind of halted. not good :D
22:21 rb2k joined #gluster
22:24 tylerflint is nfs disabled by default?
22:31 JordanHackworth joined #gluster
22:34 johnmark joined #gluster
22:34 krishna- joined #gluster
22:38 arusso joined #gluster
22:40 arusso joined #gluster
22:44 JordanHackworth joined #gluster
22:49 thekev joined #gluster
22:50 jthorne joined #gluster
22:53 edoceo tylerflint: nfs is on by default from what I see
22:54 tylerflint thanks
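If it needs to be turned off per volume, the usual knob (volume name is a placeholder) is:

    gluster volume set gv0 nfs.disable on
    # and to turn the gluster NFS server back on for that volume
    gluster volume set gv0 nfs.disable off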
22:57 erik49_ joined #gluster
23:31 StarBeast joined #gluster
23:33 jbrooks joined #gluster
23:36 smellis joined #gluster
23:55 rb2k joined #gluster
