
IRC log for #gluster, 2013-12-13


All times shown according to UTC.

Time Nick Message
00:01 TrDS hi... i have "simulated" a disk crash by replacing a disk containing a gluster brick (version 3.3.2) and restoring all data from a backup, but the backup does not contain the .glusterfs directory... gluster seems to be not very happy, files from the replaced brick do not show up in the mounted volume... how can the metadata(?) on the replaced brick be restored?
00:02 ninkotech__ joined #gluster
00:04 pk1 joined #gluster
00:05 TrDS btw. it's 1 of 3 bricks from a distributed-only volume
00:06 nueces joined #gluster
00:08 jbrooks ports
00:11 psyl0n joined #gluster
00:14 bennyturns joined #gluster
00:16 ninkotech__ joined #gluster
00:24 ninkotech__ joined #gluster
00:26 davidbierce joined #gluster
00:35 ninkotech__ joined #gluster
00:37 ninkotech joined #gluster
00:41 bennyturns joined #gluster
00:41 ninkotech_ joined #gluster
00:42 yinyin joined #gluster
00:49 ninkotech_ joined #gluster
00:50 TrDS would remove-brick and then add-brick help?
00:50 ninkotech__ joined #gluster
00:57 gtobon joined #gluster
00:59 _pol_ joined #gluster
01:00 ninkotech__ joined #gluster
01:00 ninkotech_ joined #gluster
01:09 ninkotech_ joined #gluster
01:10 davidbierce joined #gluster
01:15 ninkotech_ joined #gluster
01:22 ninkotech joined #gluster
01:27 ninkotech_ joined #gluster
01:33 fen_ joined #gluster
01:36 fen_ Does anyone have a link to a good by-the-book tutorial on setting up master/master georeplication?
01:36 ninkotech joined #gluster
01:39 pk1 left #gluster
01:45 ninkotech_ joined #gluster
01:46 KORG joined #gluster
01:51 ninkotech_ joined #gluster
01:54 ninkotech joined #gluster
02:01 ninkotech__ joined #gluster
02:02 ninkotech_ joined #gluster
02:04 psyl0n joined #gluster
02:06 shyam joined #gluster
02:12 ninkotech__ joined #gluster
02:14 ninkotech_ joined #gluster
02:21 ninkotech_ joined #gluster
02:27 _Bryan_ joined #gluster
02:27 kanagaraj joined #gluster
02:28 ninkotech__ joined #gluster
02:33 ninkotech_ joined #gluster
02:38 theron joined #gluster
02:39 TrDS left #gluster
02:46 kshlm joined #gluster
02:49 ninkotech_ joined #gluster
02:56 bharata-rao joined #gluster
02:59 gmcwhistler joined #gluster
02:59 ninkotech_ joined #gluster
03:02 DV joined #gluster
03:06 ninkotech__ joined #gluster
03:10 glusternoob joined #gluster
03:11 glusternoob joined #gluster
03:13 neofob joined #gluster
03:14 gmcwhistler joined #gluster
03:16 rjoseph joined #gluster
03:16 glusternoob Hello, I have a question about how file permissions are handled with gluster.  I've got a volume that I've mounted with the native client and I'm re-exporting it as a cifs share. The default permissions of the cifs share is root:root.  If I apply permissions to the cifs share changing it to userx:userx, it works fine but does not persist after reboot and get re-owned back to root.  All extended acls I set persist.  Is there
03:16 glusternoob something I'm doing wrong? Is this intended behaviour?
03:43 itisravi joined #gluster
03:44 pk joined #gluster
03:48 ninkotech joined #gluster
03:52 jag3773 joined #gluster
03:55 ppai joined #gluster
03:56 shubhendu joined #gluster
03:58 ninkotech joined #gluster
04:13 ninkotech joined #gluster
04:15 shyam joined #gluster
04:21 ninkotech joined #gluster
04:29 kdhananjay joined #gluster
04:30 ndarshan joined #gluster
04:32 ninkotech joined #gluster
04:43 Paul-C left #gluster
04:46 dusmant joined #gluster
04:57 RameshN joined #gluster
04:57 bala joined #gluster
04:57 ninkotech joined #gluster
05:08 MiteshShah joined #gluster
05:08 CheRi joined #gluster
05:13 timothy joined #gluster
05:19 ninkotech joined #gluster
05:22 vpshastry joined #gluster
05:22 saurabh joined #gluster
05:30 raghu joined #gluster
05:33 dylan_ joined #gluster
05:38 ninkotech joined #gluster
05:46 shruti joined #gluster
05:46 hagarth joined #gluster
05:52 gmcwhistler joined #gluster
05:58 timothy joined #gluster
06:02 dusmant joined #gluster
06:10 bulde joined #gluster
06:12 _pol joined #gluster
06:13 shylesh joined #gluster
06:13 zeittunnel joined #gluster
06:16 leblaaanc joined #gluster
06:22 mohankumar joined #gluster
06:29 kanagaraj_ joined #gluster
06:29 prasanth joined #gluster
06:38 overclk joined #gluster
06:44 TvL2386 joined #gluster
06:48 psharma joined #gluster
06:51 ricky-ti1 joined #gluster
06:52 kanagaraj joined #gluster
06:55 harish joined #gluster
07:11 ngoswami joined #gluster
07:12 anands joined #gluster
07:21 jtux joined #gluster
07:28 _pol joined #gluster
07:42 MiteshShah joined #gluster
07:42 baul left #gluster
07:42 sheldonh joined #gluster
07:44 ninkotech_ joined #gluster
07:49 ekuric joined #gluster
07:49 glusterbot New news from resolvedglusterbugs: [Bug 962619] glusterd crashes on volume-stop <https://bugzilla.redhat.com/show_bug.cgi?id=962619>
07:52 thogue joined #gluster
07:55 ctria joined #gluster
08:00 andreask joined #gluster
08:15 eseyman joined #gluster
08:23 keytab joined #gluster
08:23 bharata-rao Can someone explain why mounting a volume via mount cmd  fails while it succeeds with glusterfs cmd as shown here with logs at dpaste.com/1502824 ?
08:29 ninkotech__ joined #gluster
08:30 ninkotech_ joined #gluster
08:30 xavih pk: yes, I'm Xavier Hernandez
08:39 ninkotech__ joined #gluster
08:50 dusmant joined #gluster
08:51 ninkotech_ joined #gluster
08:52 hagarth joined #gluster
08:52 ninkotech__ joined #gluster
08:57 nshaikh joined #gluster
08:57 timothy joined #gluster
09:00 ninkotech joined #gluster
09:01 TrDS joined #gluster
09:07 pk xavih: ping
09:11 pk xavih: I was just wondering if you had any more concerns about change in syncop infra.
09:11 pk xavih: I will be sending final patch for this on Monday
09:11 GabrieleV joined #gluster
09:12 * PatNarciso yawns
09:16 ninkotech joined #gluster
09:16 ninkotech_ joined #gluster
09:17 clag_ joined #gluster
09:17 dusmant joined #gluster
09:17 clag_ left #gluster
09:26 GabrieleV joined #gluster
09:27 xavih pk: no, I just wasn't sure that the same problem couldn't happen to other tls data
09:29 xavih pk: now it seems that everything else is OK. The only possible problem could be a use of uuid_utoa() or lkowner_utoa() for a purpose they haven't been designed
09:30 calum_ joined #gluster
09:30 pk xavih: cool then. I will send the final patch on Monday. Thanks for your time
09:30 xavih pk: I should have reviewed the patch, sorry... :(
09:30 xavih pk: you're welcome
09:33 ndevos bharata-rao: uhh... maybe try with 'mount -t glusterfs -o log-level=DEBUG ...' ?
09:35 dylan_ joined #gluster
09:35 pk xavih: It's not the final patch. I sent it just to check if there are any big failures in our regression framework.
09:35 pk xavih: I will add you for reviewing it when I send it on Monday
09:37 xavih pk: ok, thanks :)
09:38 bharata-rao ndevos, dpaste.com/1502909
09:38 overclk joined #gluster
09:41 ndevos bharata-rao: thats really weird, maybe SElinux is blocking some permissions when /sbin/mount executes mount.glusterfs -> glusterfs?
09:42 ndevos bharata-rao: you could try to execute '/sbin/mount.glusterfs openstack:test /mnt' directly, maybe that gives a hint
09:44 timothy joined #gluster
09:46 bharata-rao ndevos, I have disabled selinux on this VM totally
09:47 bharata-rao ndevos, mount.glusterfs is failing similar to mount
09:51 ninkotech_ joined #gluster
09:52 ninkotech joined #gluster
09:52 ndevos bharata-rao: I dont have any further ideas yet...
09:53 bharata-rao ndevos, thanks :)
09:54 ndevos bharata-rao: got any errors in the brick logs?
09:54 bharata-rao ndevos, trying to use gluster as cinder backend, but looks like mounting from cinder service is failing, so I am manually mounting now
09:54 bharata-rao ndevos, let me check
09:55 piotrektt joined #gluster
09:56 nshaikh joined #gluster
10:01 bharata-rao ndevos, in the failure case(mount -t), no brick logs, in the success case (glusterfs -s), I get just "...[server-handshake.c:567:server_setvolume] 0-test-server: accepted client from..." message
10:02 * bharata-rao checks glusterd.log
10:04 spandit joined #gluster
10:06 kanagaraj joined #gluster
10:06 ninkotech_ joined #gluster
10:12 aravindavk joined #gluster
10:15 TrDS hi... i have "simulated" a disk crash by replacing a disk containing a gluster brick (version 3.3.2) and restoring the files from a backup, but the backup does not contain the .glusterfs directory and the extended attributes... gluster seems to be not very happy, ls on the mounted volumes shows sometimes the files from the other bricks, sometimes the files from the replaced brick, including the placeholders with sticky bit...
10:17 glusterbot New news from newglusterbugs: [Bug 1042764] glusterfsd process crashes while doing ltable cleanup <https://bugzilla.redhat.com/show_bug.cgi?id=1042764>
10:21 RameshN joined #gluster
10:25 kanagaraj_ joined #gluster
10:27 timothy joined #gluster
10:31 gdubreui joined #gluster
10:34 pk left #gluster
10:38 dylan_ joined #gluster
10:39 zwu joined #gluster
10:43 hybrid512 joined #gluster
10:43 ninkotech__ joined #gluster
10:47 RameshN joined #gluster
10:51 psyl0n joined #gluster
10:51 ndarshan joined #gluster
10:53 hagarth joined #gluster
10:54 MiteshShah joined #gluster
10:54 prasanth joined #gluster
11:01 ppai joined #gluster
11:15 ninkotech joined #gluster
11:17 overclk joined #gluster
11:35 XATRIX joined #gluster
11:35 XATRIX Ah.. glusterfs fuse is extremely slow
11:39 samppah XATRIX: hello again
11:39 FarbrorLeon joined #gluster
11:39 samppah can you describe "extremely slow"?
11:42 mozgy joined #gluster
11:44 mozgy hello, any tips for upgrading gluster 3.2 -> 3.4 (basically centos 6.4 -> 6.5)
11:45 hagarth joined #gluster
11:48 ndevos mozgy: check http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
11:48 XATRIX samppah: i've put my openvz container on a partition which is exported via gluster
11:48 XATRIX And mounted by ve1-ua:storage on /mnt/pve/storage type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
11:48 XATRIX And i was waiting for the VPS server to start up about 15-30 seconds
11:49 mozgy ndevos, yah, reading that already but no mention of errors I'm getting
11:49 XATRIX glusterfsd on a system was eating a whole 1 core of 8
11:49 XATRIX 105% cpu (1 of 8 cores) load by glusterfsd
11:50 kanagaraj joined #gluster
11:50 ndevos mozgy: you did the upgrade from 3.2 to 3.3 first?
11:50 XATRIX actually maybe more than 30 secs
11:50 ndevos XATRIX: you dont happen to use ,,(ext4) ?
11:50 glusterbot XATRIX: The ext4 bug has been fixed in 3.3.2 and 3.4.0. Read about the ext4 problem at http://goo.gl/Jytba or follow the bug report here http://goo.gl/CO1VZ
11:50 mozgy uh, no, I didn't
11:50 XATRIX I'm using ext4
11:51 mozgy ehm, so this is missing -> `b) glusterd --xlator-option *.upgrade=on -N` ..
11:52 mozgy on 3.2->3.3 step .. I presume
11:53 ndevos mozgy: right, that post explains the 3.3 -> 3.4 process, and points to an other post for 3.2 -> 3.3
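A rough sketch of the per-server sequence those two posts describe (3.2 -> 3.3 first, then 3.3 -> 3.4); the yum step and package names here are assumptions, only the volfile-regeneration command is the one quoted above, and 3.2 -> 3.3 is not a rolling upgrade, so plan for downtime:

    service glusterd stop
    yum update glusterfs glusterfs-server glusterfs-fuse   # assumed package set, to 3.3.x first
    glusterd --xlator-option *.upgrade=on -N               # regenerate volfiles, then exit
    service glusterd start
    # repeat the same stop / update / upgrade-option / start cycle for 3.3.x -> 3.4.x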
11:54 mozgy ermh, no 3.3.x rpms for centos 6.5 tho
11:54 RameshN joined #gluster
11:59 ndevos mozgy: how about http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.2/CentOS/
11:59 glusterbot Title: Index of /pub/gluster/glusterfs/3.3/3.3.2/CentOS (at download.gluster.org)
11:59 mozgy ndevos, exactly, NO 6.5 dir there
12:00 ndevos mozgy: that does not really matter, earlier versions will work too
12:00 mozgy trying as we speak ..
12:01 mozgy pondering on VolReplica rpmsave diffs :)
12:08 DV joined #gluster
12:09 DV__ joined #gluster
12:15 coxy82 using geo-replication between two volumes over a WAN.  The .glusterfs directory keeps disappearing on the slave volume.  What is going wrong?
12:16 ppai joined #gluster
12:19 mozgy err, crap, can't do rolling 3.2->3.3 ..
12:20 askb joined #gluster
12:32 bala joined #gluster
12:36 mozgy ok, next question, on 3.3.x I do not see an entry in `df` like "host:/VolReplica" any more, is that ok ?
12:47 ira joined #gluster
12:54 edward1 joined #gluster
12:57 mozgy aint working -> E [client-handshake.c:1695:client_query_portmap_cbk] 0-VolReplica-client-1: failed to get the port number for remote subvolume
12:58 mozgy `gluster vol info` says Stopped but `gluster vol start` says Volume already started ..
12:59 mozgy *puzzled*
12:59 bala joined #gluster
12:59 ninkotech joined #gluster
13:07 mbukatov joined #gluster
13:08 zeittunnel joined #gluster
13:13 mozgy any help ?
13:14 marbu joined #gluster
13:21 askb joined #gluster
13:27 bala joined #gluster
13:29 harish joined #gluster
13:31 rwheeler joined #gluster
13:49 B21956 joined #gluster
13:54 diegows joined #gluster
13:56 mozgy d'oh, ok solved it by - service glusterd stop ; rm -fr /var/lib/glusterd ; service glusterd start ..
14:02 bennyturns joined #gluster
14:04 TrDS gluster is "restoring" the trusted.gfid extended attribute on my replaced disk, but the id differs from the other bricks... might this be the reason why i see different content for this directory on each ls run? why is this happening?
14:22 dbruhn joined #gluster
14:37 dannyroberts_ joined #gluster
14:43 sroy_ joined #gluster
14:46 hagarth joined #gluster
14:50 kaptk2 joined #gluster
14:53 sroy_ joined #gluster
15:01 sroy_ joined #gluster
15:02 14WABOKUZ joined #gluster
15:04 sroy_ joined #gluster
15:07 sroy_ joined #gluster
15:08 andreask joined #gluster
15:08 bugs_ joined #gluster
15:11 sroy_ joined #gluster
15:13 ricky-ti1 joined #gluster
15:15 pdrakeweb joined #gluster
15:18 glusterbot New news from newglusterbugs: [Bug 1042894] Missing option acl for glusterfs volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1042894>
15:20 wushudoin joined #gluster
15:21 _Bryan_ joined #gluster
15:24 XATRIX Is there any way to mount gluster as non-fuse and non-nfs ?
15:25 XATRIX gfs1.jamescoyle.net:/datastore /mnt/datastore glusterfs defaults,_netdev 0 0 - does it mount it as a gluster ?
15:25 semiosis XATRIX: not "mount" but there is a client api you can use directly in an application
15:25 semiosis mount it as gluster == fuse mount
15:26 XATRIX damn
15:26 XATRIX this is how proxmox mount's the gluster:
15:26 XATRIX ve1-ua:storage on /mnt/pve/storage type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
15:26 ninkotech_ joined #gluster
15:27 XATRIX what if : gfs1.jamescoyle.net:/datastore /mnt/datastore glusterfs defaults,_netdev 0 0
15:27 XATRIX is it equal ? despite the host part, i'll change it
15:29 ndk joined #gluster
15:31 vpshastry joined #gluster
15:31 vpshastry left #gluster
15:43 askb joined #gluster
15:47 ricky-ticky joined #gluster
15:48 zaitcev joined #gluster
15:51 kmai007 joined #gluster
15:52 kmai007 good morning, 1 of my 4 gluster bricks does not have the same gluster vol configuration, how do i get it back in sync?
15:52 dusmant joined #gluster
15:55 samppah XATRIX: what glusterfs version you are using?
15:56 samppah kmai007: look at gluster volume sync command
15:57 zerick joined #gluster
15:57 XATRIX glusterfs 3.4.1 built on Oct 14 2013 09:10:06
15:57 XATRIX Repository revision: git://git.gluster.com/glusterfs.git
15:58 XATRIX proxmox 3.1 (debian)
15:58 XATRIX Seems like i'm doing something wrong, but it's slow as hell
15:58 XATRIX And cpu hungry
15:59 movi joined #gluster
16:00 movi hi, i'm having problems understanding the concepts in gluster. to be exact - i want a replicated gluster share, where there are 2 servers. but i don't get the concept of a "brick"
16:01 semiosis movi: ,,(glossary)
16:01 glusterbot movi: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
16:01 movi should i treat bricks as quasi-partitions that are available via that server?
16:01 semiosis a brick is a directory-on-a-server which is used by glusterfs as backend storage, and nothing else (you shouldn't access bricks by any other means than through glusterfs client mounts)
16:01 samppah XATRIX: sorry, i'm not familiar with proxmox.. any error messages or warnings in log files?
16:02 samppah XATRIX: also how slow it is?
16:02 movi semiosis, and then when clients connect, they connect to a volume, not a brick?
16:02 XATRIX extremely slow.. I placed an openvz container onto the gluster partition. It usually starts in 10 sec. It was about 40-55 sec to get ready
16:02 movi a brick is a backend for a volume?
16:03 semiosis bricks are backend storage for a volume, yes
16:03 XATRIX + i couldn't even open my website from the vps
16:03 samppah XATRIX: what about after it has started? does it do image file in start?
16:03 movi ah, ok. that clears it up. thanks
16:03 semiosis when a glusterfs native fuse client connects, it pulls the volume config from the mount server, then makes direct connections to all the bricks.  each brick is exported over the network individually
16:03 semiosis see ,,(mount server)
16:03 glusterbot The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
16:03 semiosis also ,,(processes)
16:03 glusterbot The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
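To make those terms concrete, a minimal sketch of building a 2-way replicated volume from two bricks; hostnames and brick paths are placeholders:

    # on server1: form the pool, then combine one brick per server into a volume
    gluster peer probe server2
    gluster volume create myvol replica 2 server1:/export/brick1 server2:/export/brick1
    gluster volume start myvol
    # a client mounts the volume, never a brick directly
    mount -t glusterfs server1:/myvol /mnt/myvol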
16:04 dylan_ joined #gluster
16:04 semiosis afk
16:05 movi ok, one more question then. am i reading correctly, that when i want to create a replicated volume, i have to specify how many replicas i want upfront? what if i want to add more replicas in the future ? (http://www.gluster.org/community/documentation/index.php/Getting_started_configure)
16:05 glusterbot Title: Getting started configure - GlusterDocumentation (at www.gluster.org)
16:10 XATRIX samppah: what do you mean image file in start ?
16:10 XATRIX It simply start vz container
16:11 XATRIX No images, it's not KVM with .raw or .img file as a disk
16:11 samppah XATRIX: okay
16:11 XATRIX I've also tuned my MTU to 9000 between eth0<->eth0
16:12 XATRIX But the packet length usually < 80-800 bytes
16:13 XATRIX http://ur1.ca/g6nim
16:13 glusterbot Title: #61550 Fedora Project Pastebin (at ur1.ca)
16:15 XATRIX How can i mount my gluster share via command line ?
16:15 XATRIX mount -t glusterfs -o .. ?
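A minimal sketch of a command-line mount, assuming the volume is the "storage" one served by ve1-ua that appears in the earlier mount output:

    mount -t glusterfs ve1-ua:/storage /mnt/pve/storage
    # the same, with a higher client log level while debugging
    mount -t glusterfs -o log-level=DEBUG ve1-ua:/storage /mnt/pve/storage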
16:18 shyam joined #gluster
16:18 XATRIX As long as i know, VZcontainer has a lot of files to work with during startup
16:18 XATRIX It's like a usual linux system
16:19 XATRIX But the shared kernel
16:19 XATRIX So it reads/starts many files/scripts during startup
16:21 XATRIX Possibly i have very slow performance on reading a huge amount of small files
16:32 semiosis movi: you can change the replica count later on using the add-brick command.  by the way, make sure you're using the ,,(latest) version of glusterfs, not some older version bundled with your distro
16:32 glusterbot movi: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
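A hedged sketch of how raising the replica count with add-brick looks; volume, host, and path names are placeholders:

    # grow a replica-2 volume to replica 3 by adding one brick per existing replica set
    gluster volume add-brick myvol replica 3 server3:/export/brick1
    # then let self-heal populate the new brick with the existing data
    gluster volume heal myvol full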
16:33 movi semiosis, i am, rpms straight from gluster.org
16:33 movi however, any idea if there are any RPMs compiled for SLES11 ?
16:33 movi or do i have to make my own ?
16:34 semiosis idk
16:35 dbruhn__ joined #gluster
16:38 askb joined #gluster
16:38 mozgy joined #gluster
16:38 anands joined #gluster
16:38 brosner joined #gluster
16:38 pravka joined #gluster
16:38 nonsenso joined #gluster
16:38 osiekhan joined #gluster
16:38 Shdwdrgn joined #gluster
16:38 mibby joined #gluster
16:39 TrDS i've written a script to copy the gfid attribute from existing directories to the replaced brick, this seems to help... now i get consistent output from ls
16:40 zerick joined #gluster
16:41 kaptk2 joined #gluster
16:41 TrDS someone here knows about the gluster internals i bet... why did gluster assign new/different gfids to those directories?
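A guess at what such a gfid-copy script might look like (not TrDS's actual script); the brick paths are hypothetical, it only handles directories (directory gfids are identical on every brick, while files in a distribute-only volume live on a single brick), and the usual warning about touching bricks outside of gluster applies:

    #!/bin/bash
    GOOD=/export/brick-good          # assumed path of a healthy brick
    RESTORED=/export/brick-restored  # assumed path of the restored brick
    cd "$GOOD" || exit 1
    find . -type d | while read -r d; do
        # read the directory's gfid from the healthy brick as hex (0x...)
        gfid=$(getfattr -n trusted.gfid -e hex "$d" 2>/dev/null \
               | awk -F= '/^trusted.gfid=/{print $2}')
        # stamp the same gfid onto the matching directory on the restored brick
        if [ -n "$gfid" ] && [ -d "$RESTORED/$d" ]; then
            setfattr -n trusted.gfid -v "$gfid" "$RESTORED/$d"
        fi
    done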
16:42 Amanda__ joined #gluster
16:42 johnmark_ joined #gluster
16:47 ira joined #gluster
16:48 brosner_ joined #gluster
16:51 pdrakeweb joined #gluster
16:51 kmai007 @samppah it worked the gluster volume sync HOSTNAME $VOL, ty
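For reference, a hedged sketch of the sync invocation that worked here; run on the peer whose config is stale, with placeholder host and volume names:

    # pull the definition of one volume from a peer holding the good copy
    gluster volume sync goodhost VOLNAME
    # or pull every volume definition from that peer
    gluster volume sync goodhost all
    gluster volume info VOLNAME    # verify the options now match the other peers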
16:53 askb joined #gluster
16:54 pravka joined #gluster
16:54 nonsenso joined #gluster
16:54 mibby joined #gluster
16:54 movi question : will gluster play nicely with a btrfs brick ?
16:55 mozgy joined #gluster
16:55 kmai007 any guesses on what would cause this from my fuse client?
16:55 kmai007 [2013-12-10 16:22:42.658176] C [client-handshake.c:127:rpc_client_ping_timer_expired] 0-devstatic-client-1: server 69.58.224.72:49153 has not responded in the last 42 seconds, disconnecting.
16:56 Shdwdrgn joined #gluster
16:56 kmai007 is that the network.ping-timeout? b/c of the 42 sec.
16:58 kmai007 followed by http://fpaste.org/61563/69538781/
16:58 glusterbot Title: #61563 Fedora Project Pastebin (at fpaste.org)
16:58 osiekhan joined #gluster
16:59 anands joined #gluster
17:01 johnbot11 joined #gluster
17:02 aliguori joined #gluster
17:03 semiosis you are having network problems
17:04 semiosis kmai007:
17:04 andreask joined #gluster
17:04 dewey_ joined #gluster
17:04 semiosis rpc_client_ping_timer_expired == ping timeout
17:05 sudhakar joined #gluster
17:06 sudhakar hello  - i have some issues with glusterFS...
17:06 semiosis hi
17:06 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:06 semiosis sudhakar: ^
17:07 sudhakar i have a glusterFS cluster with 8 nodes in AWS
17:07 sudhakar each node is having 2 bricks of 128GB size of each brick
17:08 sudhakar and the volume is a Distributed-Replica
17:08 sudhakar the total capacity of the volume is 2 TB and we have data 42%
17:09 sudhakar for the past few days i am seeing a very slow response for both read & write and i have no idea why it's taking so long..
17:10 sudhakar for example from the client machine where i mounted the glusterFS share folder... even "ls -l" command is taking 2 mins to respond
17:10 sudhakar all the glusterFS nodes are c1.xlarge instance
17:11 PatNarciso hmm.
17:11 sudhakar Volume Name: oleshare
17:11 sudhakar Type: Distributed-Replicate
17:11 sudhakar Volume ID: d01e8748-0242-4efd-af14-26a93e0f3919
17:11 sudhakar Status: Started
17:11 sudhakar Number of Bricks: 16 x 2 = 32
17:11 semiosis @later tell sudhakar please use pastie.org or similar for multiline pastes, to avoid being kicked for flooding.  thanks.
17:11 glusterbot semiosis: The operation succeeded.
17:12 sudhakar joined #gluster
17:12 semiosis sudhakar: please use pastie.org or similar for multiline pastes, to avoid being kicked for flooding.  thanks.
17:12 sudhakar sorry guys.. got disconnected
17:12 bala joined #gluster
17:12 sudhakar sure.,.. ack
17:13 semiosis are you using ephemeral or ebs for your bricks?
17:13 semiosis ebs exhibits wild latency variations
17:13 sudhakar http://pastie.org/8550320
17:13 glusterbot Title: #8550320 - Pastie (at pastie.org)
17:13 sudhakar all are EBS volumes
17:13 semiosis also listing directories is a slow operation in glusterfs, thats just a fact
17:13 sudhakar ok..
17:13 semiosis so if you have thousands of files in a directory it's common for a listing to be slooow
17:13 kmai007 @semiosis thanks, i'll see if i can prove them network boys wrong
17:14 semiosis kmai007: send them the log
17:14 semiosis every time someone in here says they have a ping-timeout "but the network is fine" they come back later to say they found the problem with the network :)
17:14 semiosis every time
17:14 kmai007 yeh don't list, or if u do turn off the alias color in ls
17:15 kmai007 fantastic, if i can prove them wrong, i will send you a cupcake
17:15 semiosis hahahaha
17:15 sudhakar semiosis - even the mkdir or saving files are giving timeouts...
17:16 sudhakar not sure if i am missing any configs...
17:16 semiosis hrm ok thats different.  pastie.org your client logs if you can
17:17 semiosis also, wow.  you configured lots of options.  why?  did you test all of those to see if they make a difference somehow?
17:17 kmai007 man i tried pastie.org, but my proxy dude keeps denying me
17:17 kmai007 maybe its the key word pastie
17:17 kmai007 LOL
17:17 sudhakar semiosis - http://pastie.org/8550323
17:17 glusterbot Title: #8550323 - Pastie (at pastie.org)
17:17 semiosis i usually advise to not change volume options unless absolutely necessary
17:17 sudhakar yeah... i tried few options if something helps..
17:17 semiosis kmai007: github gist?  there's plenty of other paste sites out there
17:18 sudhakar we have content management application which will use this filestore on glusterFS to store the items
17:18 semiosis sudhakar: that looks like app logs, not what i need
17:18 semiosis did you mount glusterfs with 'mount -t glusterfs server:oleshare /some/path'?  then pastie the log in /var/log/glusterfs/some-path.log
17:18 sudhakar what do you mean by client logs ?
17:18 mibby joined #gluster
17:19 sudhakar ok.. let me check..
17:23 sudhakar semiosis - http://pastie.org/8550332
17:23 glusterbot Title: #8550332 - Pastie (at pastie.org)
17:23 mohankumar joined #gluster
17:24 sudhakar and this is my mount entry in /etc/fstab
17:24 sudhakar ppenas1:/oleshare /glusterfs/oleshare glusterfs defaults,transport=tcp,_netdev 0 0
17:24 ctria joined #gluster
17:25 semiosis sudhakar: looks like some of those options are causing problems....
17:25 semiosis [2013-12-13 16:36:00.811542] E [xlator.c:390:xlator_init] 0-oleshare-quick-read: Initialization of volume 'oleshare-quick-read' failed, review your volfile again
17:25 semiosis [2013-12-13 16:36:00.814478] E [quick-read.c:827:check_cache_size_ok] 17-oleshare-quick-read: Cache size 17179869184 is greater than the max size of 15711588352
17:25 sudhakar ok ..
17:25 semiosis maybe you should reset the volume to default config
17:25 semiosis iirc, gluster volume reset
17:25 sudhakar ok.. let me try that
17:25 sudhakar and restart the volume ?
17:26 kmai007 oh man so yesterday i was putting out fires, and on a gluster brick that i normally do not execute gluster cmds on
17:26 TrDS another question: i need to rebalance a distributed cluster, but can touch only 2 bricks at a time... are there ways to control the rebalance process or can this be done only by hand? if so, would (stop; delete all .glusterfs dirs; move files; start) be correct?
17:26 semiosis should not need to restart the volume.  config changes can be made online
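A minimal sketch of the reset suggested above, using the oleshare volume from the paste; the single-option form and the performance.cache-size name are assumptions based on the quick-read cache error:

    # drop every non-default option on the volume
    gluster volume reset oleshare
    # or reset just one option, e.g. the oversized cache behind quick-read
    gluster volume reset oleshare performance.cache-size
    gluster volume info oleshare   # the "Options Reconfigured" list should shrink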
17:26 kmai007 i ran volume reset devstatic all
17:26 kmai007 and only that brick reset its options
17:26 sudhakar ok..
17:26 kmai007 the other 3 still had their old options
17:27 kmai007 not sure how it became that way
17:27 kmai007 but i ran a gluster volume sync to get it looking like the rest of the gang
17:27 semiosis TrDS: volume set $volname cluster.background-self-heal-count  2
17:29 kmai007 @semiosis; confusing, is that option for self-heal? and rebalance fix-layout was what @TrDS was asking?
17:33 TrDS semiosis: this option controls the number of threads it seems... by 2 bricks at a time i meant it's possible to move from brick A to brick B, but before changing another brick C, there is a sync operation to be executed (updating snapshot parity information via snapraid)
17:34 lkoranda joined #gluster
17:34 semiosis good point, then idk
17:40 _pol joined #gluster
17:54 neofob joined #gluster
17:55 pdrakeweb joined #gluster
17:59 kmai007 @semiosis do you know if the rpc_client_ping_timer_expired is configurable?  is that the same as network.ping-timeout ?
17:59 semiosis the same
18:00 kmai007 ok boss, so is the function of network.ping-timeout to drop any brick that meets that criteria, or all bricks?  reason being i'm seeing, on all the bricks, to-the-second matching entries disconnecting the same client connection
18:02 semiosis the client will wait, until ping-timeout, for a brick to respond, before marking it offline & disconnecting from it
18:03 semiosis since the client needs to talk to all bricks (in general) it will hang until the ping-timeout if a brick becomes unreachable
18:03 kmai007 but never an all or nothing condition
18:03 kmai007 ok i've got some easter egg hunting to do
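The 42 seconds in kmai007's log line matches the default network.ping-timeout; a hedged sketch of checking and (cautiously) changing it on the devstatic volume:

    # volume info only lists the option if it was changed from the default
    gluster volume info devstatic | grep ping-timeout
    # 42 is the default; lowering it speeds up failover but makes false disconnects more likely
    gluster volume set devstatic network.ping-timeout 42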
18:12 pdrakeweb joined #gluster
18:12 kmai007 is there an ideal example of when to use performance.io.threads ?  I want to test, but my flipping dev environment users are all mad because they have deadlines, and i don't have another area to try this change and make sense of what it does
18:13 semiosis in my tests it doesnt do anyhting
18:14 kmai007 here is what I have semiosis
18:14 kmai007 http://ur1.ca/g6oc1
18:14 glusterbot Title: #61591 Fedora Project Pastebin (at ur1.ca)
18:14 kmai007 i know its hard to tell what i need,
18:15 kmai007 but my setup is a web content storage, and i have a client that is exporting that storage via samba
18:15 semiosis how do you know gluster options are a bottleneck?
18:15 semiosis most often the bottleneck is disk & network latency
18:16 semiosis can't tune that with gluster volume set
18:16 kmai007 is it easier to say "i don't need that option"? i didn't go crazy, I was actually setting those options to find out why there was so much lag yesterday as i was trying to put out fires
18:16 kmai007 you're very well right
18:16 kmai007 as of right now
18:16 kmai007 the world is at peace
18:16 kmai007 maybe because people took vacation
18:18 kmai007 for those of you who are getting "structure needs cleaning"  I suppose this bug/patch was what I was hoping for in glusterfs-3.4.1-3 https://bugzilla.redhat.com/show_bug.cgi?id=1041109 , patched but not incorporated by this http://review.gluster.org/4989/
18:18 glusterbot Bug 1041109: unspecified, unspecified, ---, csaba, NEW , structure needs cleaning
18:19 jbd1 joined #gluster
18:19 glusterbot New news from newglusterbugs: [Bug 1043009] gluster fails under heavy array job load load <https://bugzilla.redhat.com/show_bug.cgi?id=1043009>
18:20 kmai007 @glusterbot i get this when i click on that link '1043009>' is not a valid bug number nor an alias to a bug.
18:21 kmai007 but the search for the ticket works....
18:21 Alex remove the trailing >
18:21 Alex It's your client that doesn't quite grok > isn't part of the URL
18:22 kmai007 brilliant, thanks
18:28 sroy_ joined #gluster
18:37 Mo__ joined #gluster
18:38 Liquid-- joined #gluster
19:12 vpshastry joined #gluster
19:14 lpabon joined #gluster
19:19 XpineX_ joined #gluster
19:20 Liquid-- joined #gluster
19:23 XpineX__ joined #gluster
19:24 TrDS the rebalance operation could be influenced by a self-written translator, right?
19:26 vpshastry joined #gluster
19:27 theron joined #gluster
19:35 diegows joined #gluster
19:47 flakrat joined #gluster
19:47 flakrat left #gluster
19:51 glusterbot New news from resolvedglusterbugs: [Bug 950083] Merge in the Fedora spec changes to build one single unified spec <https://bugzilla.redhat.com/show_bug.cgi?id=950083>
19:55 sroy_ joined #gluster
19:59 pithagorians joined #gluster
20:14 pithagorians anybody here encountering issues with gluster clients? i have the 3.3 version on both client and server. sometimes the client partition unmounts unexpectedly
20:17 kmai007 if i were to mount up gluster NFS, are options like intr available?
20:18 kmai007 pithagorians: what do your log times show, do they coincide between client/server ?
20:32 kmai007 Does anybody have some awesome guides on how to remove locks from gluster ?
20:34 RedShift joined #gluster
20:40 calum_ joined #gluster
20:42 bulde joined #gluster
20:47 kmai007 there is this https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/ch21s02.html but it doesn't show how to release directories
20:47 glusterbot Title: 21.2. Troubleshooting File Locks - Red Hat Customer Portal (at access.redhat.com)
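A rough sketch of the statedump / clear-locks approach that document describes; the volume name and paths are placeholders and the exact clear-locks arguments should be checked against the doc before use:

    # dump lock state first and inspect the statedump files on the bricks
    gluster volume statedump VOLNAME
    # clear locks on a path; kind is blocked|granted|all, lock type is inode|entry|posix
    gluster volume clear-locks VOLNAME /stuck/dir kind granted entry
    gluster volume clear-locks VOLNAME /stuck/dir/file1 kind all posix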
20:48 failshell joined #gluster
20:52 ninkotech joined #gluster
20:52 ninkotech__ joined #gluster
21:02 _pol joined #gluster
21:07 KORG joined #gluster
21:08 _pol_ joined #gluster
21:21 * johnmark_ checks your last email for exact balance
21:22 FarbrorLeon joined #gluster
21:29 _pol_ joined #gluster
21:30 XATRIX joined #gluster
21:31 XATRIX Hi guys, yes, i've checked for my performance, and it's really slow as hell
21:31 XATRIX samppah: any idea how to fix it ?
21:31 XATRIX I have 2 nodes, and share is mounted across
21:34 MrNaviPacho joined #gluster
21:35 XATRIX possibly i have ext4 issue
21:38 XATRIX "After a fair bit of testing / discussions with people who know a lot more about storage than I do, we have come to the conclusion that Gluster simply isn't able to perform as needed, primarily due to it using FUSE and the context switching needed to do that "
21:42 jruggiero joined #gluster
21:45 kmai007 so mount it up via NFS
21:45 kmai007 and see if it fares better
21:50 flrichar joined #gluster
21:51 sroy_ joined #gluster
21:57 XpineX joined #gluster
21:59 evil_andy_ joined #gluster
21:59 evil_andy_ perhaps a silly question, but does glusterfs *require* 2+ nodes in order to create a volume?
22:00 semiosis evil_andy_: glusterfs requires the number of bricks to be a multiple of the replica count
22:00 semiosis the number of servers is strongly recommended to also be a multiple of the replica count, however this is not an absolute requirement
22:01 evil_andy_ Hmm, I'm trying to set up a really simple volume with no replication, but when I add IP:/export as the brick, it says that the host is not in 'peer in cluster' state
22:01 semiosis @learn replica count as glusterfs requires the number of bricks to be a multiple of the replica count; the number of servers is strongly recommended to also be a multiple of the replica count, however this is not an absolute requirement
22:01 glusterbot semiosis: The operation succeeded.
22:01 semiosis @replica count
22:01 glusterbot semiosis: glusterfs requires the number of bricks to be a multiple of the replica count; the number of servers is strongly recommended to also be a multiple of the replica count, however this is not an absolute requirement
22:02 semiosis evil_andy_: have you probed the ,,(hostnames) already?
22:02 glusterbot evil_andy_: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
22:02 evil_andy_ this may be the issue, I'm trying all this with only a single peer
22:02 evil_andy_ itself
22:03 semiosis check to make sure the hostname maps to the machines correct eth0 ip
22:03 semiosis oh maybe you need to map the hostname to 127.0.0.1 in /etc/hosts
22:04 evil_andy_ Ahh, ok. I was actually trying to use the IP, not the hostname. using the hostname worked just fine. sorry!
22:04 semiosis i recommend against using IPs in brick definitions
22:04 semiosis glad you got it working
22:04 evil_andy_ cool, using the hostname did the trick, thanks!
22:05 semiosis yw
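A hedged single-server sketch of what evil_andy_ ended up with; hostname and paths are placeholders:

    # make the server's hostname resolve locally (only needed if DNS doesn't already)
    echo "127.0.0.1  gluster1" >> /etc/hosts
    # a one-brick volume, no replication
    gluster volume create myvol gluster1:/export/brick1
    gluster volume start myvol
    mount -t glusterfs gluster1:/myvol /mnt/myvol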
22:17 kmai007 so from the initial gluster brick when you peer probe hostname
22:17 kmai007 will the node0 always be displayed as an IP from the peer status cmd from other peers ?
22:18 XATRIX kmai007: yes, it looks much faster than when it's mounted via fuse.gluster
22:18 semiosis yes until you probe node0 from any of the other peers, see ,,(hostnames)
22:18 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
22:18 kmai007 i always saw this pattern and wondered if that is ALWAYS the case, since i can't get it to display the hostname
22:18 kmai007 oh no joke
22:18 glusternoob joined #gluster
22:19 kmai007 so in essence I'd have to just go out to 1 of the peers and probe node0
22:19 kmai007 thank you semiosis and glusterbot
22:20 kmai007 oh snap,  peer probe: success: host omhq1b4f port 24007 already in peer list  it works!
22:20 semiosis kmai007: glusterbot never lies
22:23 kmai007 @XATRIX what are you doing that is "faster" ?
22:23 XATRIX I've just mounted my share using nfs
22:23 kmai007 i know that
22:24 kmai007 are you opening a file, creating 1, deleting 1,
22:24 kmai007 listing
22:24 XATRIX And now, my openvz container is running much faster than it was when i mounted it as fuse.gluster
22:24 XATRIX I'm running php code
22:24 kmai007 oh
22:24 kmai007 yeh i haven't messed with php
22:24 kmai007 but from EVERYTHING that i've read
22:24 XATRIX Actually CMS Drupal. So it;s multiple reads from disk
22:24 kmai007 it really depends on how you setup your PHP vs. how FUSE will handle it
22:25 XATRIX Huge amount of reads of small files in general
22:25 kmai007 let me see i think that Joe Julian guy had a write up on it
22:26 kmai007 semiosis: glusterbot is JoeJulian, you sly dog you
22:27 kmai007 XATRIX: http://joejulian.name/blog/nfs-mount-for-glusterfs-gives-better-read-performance-for-small-files/
22:27 glusterbot Title: NFS mount for GlusterFS gives better read performance for small files? (at joejulian.name)
22:27 semiosis apc
22:27 XATRIX alright , gonna read an article
22:27 XATRIX thanks a lot
22:27 semiosis ,,(php)
22:27 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
22:27 glusterbot --fopen-keep-cache
22:27 XATRIX semiosis: already in use
22:28 semiosis XATRIX: if your php files dont change often, and you can restart apache whenever they do, then you can disable stat in apc, this will improve performance tremendously
22:28 XATRIX ok, i'll try it
22:28 semiosis best thing though would be if your framework uses autoloading instead of require/include calls
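A minimal sketch of the APC change semiosis describes; the ini path and service name are assumptions for a typical CentOS/Apache setup, and with stat off apache has to be restarted after every PHP code change:

    # disable APC's per-include stat() check
    echo "apc.stat=0" >> /etc/php.d/apc.ini   # assumed ini location; could also be php.ini
    service httpd restart                     # or apache2, depending on the distro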
22:29 XATRIX also as i noticed before, my gluster FUSE eats a hell of a cpu load
22:29 semiosis when we switched on autoloading & removed all the require calls in our old ZF1 app the speed went up a factor of 10 iirc
22:29 XATRIX >40-80% while php run
22:29 semiosis we use only fuse clients
22:29 DV__ joined #gluster
22:29 XATRIX my proxmox server also use gluster.fuse
22:29 kmai007 apc = apache ?
22:30 kmai007 sorry noobie
22:30 XATRIX nope
22:30 XATRIX apc is a caching engine for PHP
22:30 kmai007 tx XATRIX
22:31 leblaaanc joined #gluster
22:32 XATRIX It stores compiled PHP bytecode in memory for a while, and saves your CPU and disk IO by not reading/running your PHP scripts over and over
22:32 leblaaanc JoeJulian: Hey I keep missing you. Do I need to clear out xattr info on a brick so that I can properly reinitialize it if it's been written to outside of clients?
22:32 leblaaanc gluster clients that is
22:40 leblaaanc joined #gluster
22:47 leblaaanc semiosis: you around maybe to lend a hand?
22:55 semiosis what?
22:55 semiosis oh idk the answer
22:56 kmai007 do you have an R&D environment to try to recreate it?  I thought if you reformat the logical volume as either xfs or ext4 it would wipe the data, but i've never tried it to know
23:14 dbruhn__ left #gluster
23:44 theron joined #gluster
23:46 neofob joined #gluster
23:50 glusterbot New news from newglusterbugs: [Bug 1028582] GlusterFS files missing randomly - the miss triggers a self heal, then missing files appear. <https://bugzilla.redhat.com/show_bug.cgi?id=1028582>
