IRC log for #gluster, 2014-10-01


All times shown according to UTC.

Time Nick Message
00:03 uebera|| joined #gluster
00:03 uebera|| joined #gluster
00:03 capri joined #gluster
00:04 JoseBravo joined #gluster
00:05 JoseBravo In my setup of gluster geo-replication, changes I make or deletes are not being propagated to the slave. If I try to change ignore-deletes I get an error (Reserved Option)
00:06 JoseBravo gluster volume geo-replication home-pi1-1 pi1-2::home-pi1-2 config ignore-deletes 1
00:06 JoseBravo Reserved option
00:06 JoseBravo geo-replication command failed
00:07 JoseBravo Right now it's only replicating new files; if I edit a file it's not being replicated. I tried with Changelog Crawl and Hybrid Crawl but with both I get the same problem
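
This question went unanswered in-channel. As a rough sketch for readers hitting the same wall, these are the usual first checks on a 3.5-era geo-replication session, reusing the session names from the messages above; the available config keys vary by release, so treat this as a starting point rather than a fix.

    # show session health and the currently effective settings
    gluster volume geo-replication home-pi1-1 pi1-2::home-pi1-2 status detail
    gluster volume geo-replication home-pi1-1 pi1-2::home-pi1-2 config

    # switch the crawl strategy explicitly (changelog is the normal default on 3.5)
    gluster volume geo-replication home-pi1-1 pi1-2::home-pi1-2 config change_detector changelog

    # restart the session so a fresh crawl picks up pending edits and deletes
    gluster volume geo-replication home-pi1-1 pi1-2::home-pi1-2 stop
    gluster volume geo-replication home-pi1-1 pi1-2::home-pi1-2 start
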
00:16 uebera|| joined #gluster
00:16 capri joined #gluster
00:21 uebera|| joined #gluster
00:21 uebera|| joined #gluster
00:23 capri joined #gluster
00:31 jotterbot joined #gluster
00:43 RicardoSSP joined #gluster
00:47 sputnik13 joined #gluster
01:05 sputnik13 joined #gluster
01:16 justinmburrous joined #gluster
01:19 msmith_ joined #gluster
01:22 sputnik13 joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/ | GlusterFS 3.6 test week - https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
01:52 msmith_ joined #gluster
02:15 justinmb_ joined #gluster
02:22 jobewan joined #gluster
02:31 haomaiwa_ joined #gluster
02:34 kdhananjay joined #gluster
02:52 msmith__ joined #gluster
03:01 msmith_ joined #gluster
03:07 msmith_ joined #gluster
03:08 msmith_ joined #gluster
03:08 kdhananjay joined #gluster
03:14 bharata-rao joined #gluster
03:23 overclk joined #gluster
03:34 sks joined #gluster
03:42 spandit joined #gluster
03:45 nshaikh joined #gluster
03:46 gildub joined #gluster
03:48 itisravi joined #gluster
04:00 RameshN joined #gluster
04:05 nbalachandran joined #gluster
04:08 jobewan joined #gluster
04:29 kanagaraj joined #gluster
04:32 anoopcs joined #gluster
04:39 soumya joined #gluster
04:42 msmith_ joined #gluster
04:42 Rafi_kc joined #gluster
04:43 rafi1 joined #gluster
04:52 kshlm joined #gluster
05:07 ramteid joined #gluster
05:19 soumya joined #gluster
05:31 justinmburrous joined #gluster
05:32 nshaikh joined #gluster
05:33 aravindavk joined #gluster
05:33 overclk_ joined #gluster
05:36 hagarth joined #gluster
05:43 msmith_ joined #gluster
05:47 soumya joined #gluster
05:54 zerick joined #gluster
06:05 soumya joined #gluster
06:07 raghu joined #gluster
06:14 overclk joined #gluster
06:20 karnan joined #gluster
06:40 justinmburrous joined #gluster
06:42 nthomas_ joined #gluster
06:43 msmith_ joined #gluster
06:47 haomaiwang joined #gluster
06:53 ekuric joined #gluster
06:56 overclk joined #gluster
06:57 ctria joined #gluster
07:00 Fen1 joined #gluster
07:09 lalatenduM joined #gluster
07:14 bharata_ joined #gluster
07:14 fsimonce joined #gluster
07:16 DV__ joined #gluster
07:21 elico joined #gluster
07:21 user_42 joined #gluster
07:43 user_42 left #gluster
07:44 msmith_ joined #gluster
07:54 _NiC joined #gluster
07:55 DV__ joined #gluster
07:58 harish joined #gluster
08:01 anands joined #gluster
08:01 Slydder joined #gluster
08:04 nthomas_ joined #gluster
08:07 liquidat joined #gluster
08:08 Arrfab joined #gluster
08:08 Slydder morning all
08:10 Fen1 Hi !
08:11 overclk joined #gluster
08:11 Fen1 Can we create an XFS partition on a VM ?
08:11 overclk joined #gluster
08:16 meghanam joined #gluster
08:18 Slydder depends on your VM.
08:20 Slydder hey guys. got a strange situation here. I have a single gfs node (3.5.2) with a single volume, mounted either via NFS or glusterfs, where updating existing files hangs. new files and deleting existing files work as expected though.
08:30 drajen joined #gluster
08:36 soumya joined #gluster
08:36 meghanam joined #gluster
08:36 meghanam_ joined #gluster
08:42 soumya joined #gluster
08:45 msmith_ joined #gluster
08:53 justinmburrous joined #gluster
08:54 tryggvil joined #gluster
08:55 TvL2386 joined #gluster
08:59 RameshN joined #gluster
08:59 TvL2386 hi guys, I'm running glusterfs 3.4.2 and I'm trying to find out whether it's possible to export a volume read-only for one set of IP addresses and read-write for a different set of IP addresses
09:00 TvL2386 something like /etc/exports for nfsd
09:01 TvL2386 On the internet I only found that you can make a volume read only, but could not find any information on how to set it read-only for specific clients
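
A sketch of what stock 3.4 offers here, using a placeholder volume name myvol: the built-in access controls are all volume-wide, which is why an /etc/exports-style per-client read-only/read-write split is hard to reproduce without separate volumes or an external NFS layer.

    # whole-volume read-only (affects every client)
    gluster volume set myvol features.read-only on
    # restrict which addresses may mount at all (native protocol and gluster NFS respectively)
    gluster volume set myvol auth.allow 10.0.1.*
    gluster volume set myvol nfs.rpc-auth-allow 10.0.1.*
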
09:10 vimal joined #gluster
09:12 mrEriksson joined #gluster
09:14 overclk_ joined #gluster
09:16 overclk_ joined #gluster
09:18 deepakcs joined #gluster
09:23 Fen1 After I made a partition (fdisk /dev/vda), it doesn't appear? I should see vdaX (X=1-4), no ?
09:26 Norky doesn't appear where?
09:29 nthomas joined #gluster
09:34 haomaiwang joined #gluster
09:36 lalatenduM joined #gluster
09:42 saurabh joined #gluster
09:45 Fen1 no it's ok :p but when i do mkfs.xfs it says that my block is too small :(
09:46 msmith_ joined #gluster
09:49 LebedevRI joined #gluster
09:55 soumya_ joined #gluster
09:56 Fen1 agsize (256b) too small, need at least 4096 blocks
09:56 Fen1 any idea to solve it ?
09:57 capri joined #gluster
09:57 nthomas_ joined #gluster
10:02 Norky how big is the partition?
10:03 Norky Fen1, how big is the partition?
10:07 uebera|| joined #gluster
10:08 meghanam_ joined #gluster
10:10 meghanam joined #gluster
10:16 maxx2014 joined #gluster
10:17 uebera|| joined #gluster
10:18 Pupeno joined #gluster
10:18 maxx2014 hi! I'm running gluster 3.5.2 on centos 6.5 with 2 nodes, distributed. When looking at the mount log of one of the volumes, I'm seeing a lot of warning messages. so many actually, that our monitoring started alerting us about the size of the logs in /var because of gluster ;)
10:18 gomikemike joined #gluster
10:18 maxx2014 most of them are like this:
10:18 maxx2014 [2014-08-31 12:23:08.933907] W [dht-layout.c:179:dht_layout_search] 0-xfs-step1-dht: no subvolume for hash (value) = 4285753103
10:18 maxx2014 [2014-08-31 12:23:08.945972] I [dht-common.c:866:dht_lookup_everywhere_done] 0-xfs-step1-dht: cannot create linkfile file for /test/export/backup/etl01.p.static.original.images/2/7/3/3/wentronic-usb-mw-cb5-4in1-20902733.jpg on xfs-step1-client-0: hashed subvolume cannot be found.
10:18 maxx2014 Could anyone please tell me whether I need to do anything about this, or if they can be ignored?
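
These dht warnings usually mean some directories have a layout gap, so certain hash values map to no brick (often after bricks were added); the conventional remedy is a fix-layout rebalance. A sketch, inferring the volume name xfs-step1 from the log prefix above:

    gluster volume rebalance xfs-step1 fix-layout start
    gluster volume rebalance xfs-step1 status
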
10:19 capri joined #gluster
10:20 lalatenduM joined #gluster
10:22 Philambdo joined #gluster
10:31 karnan joined #gluster
10:47 msmith_ joined #gluster
10:51 diegows joined #gluster
10:57 maxx2014 anyone?
10:59 Slydder hey guys. got a strange situation here. I have a single gfs node (3.5.2) with a single volume, mounted either via NFS or glusterfs, where updating existing files hangs. adding new files and deleting existing files work as expected though. any ideas?
11:06 ur joined #gluster
11:10 ur hi, I've got an issue with gluster where file locking stops working: when trying to acquire a lock (fcntl(9, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0})) it hangs. the file was just created, nothing else is using that file. what can cause that?
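
A hung F_SETLKW on a freshly created file usually points at a stale lock still held on a brick. One way to look, sketched with a placeholder volume name myvol; the statedump files land under /var/run/gluster and list held posix locks:

    gluster volume statedump myvol
    # if a stale lock shows up, it can be released server-side
    gluster volume clear-locks myvol /path/inside/volume kind all posix
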
11:11 lalatenduM joined #gluster
11:14 side_control joined #gluster
11:19 justinmburrous joined #gluster
11:25 harish joined #gluster
11:31 fattaneh1 joined #gluster
11:39 overclk joined #gluster
11:41 chirino joined #gluster
11:47 msmith_ joined #gluster
11:56 fattaneh joined #gluster
11:58 justinmburrous joined #gluster
12:07 sprachgenerator joined #gluster
12:08 aravindavk joined #gluster
12:08 mbukatov joined #gluster
12:10 Fen1 joined #gluster
12:11 Fen1 Norky: I just followed the default settings so i don't know... :/
12:11 nshaikh joined #gluster
12:12 Norky use fdisk or parted to find out?
12:12 hagarth joined #gluster
12:20 virusuy joined #gluster
12:36 julim joined #gluster
12:36 chirino joined #gluster
12:42 Fen1 Norky: I use "fdisk -l" & "fdisk /dev/vda" & "n,p,enter,enter,enter"
12:42 Norky *head* *desk*
12:42 Norky run fdisk -l, it will tell you the size of the disk and the partitions contained within it
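
For context: the mkfs.xfs agsize error Fen1 hit almost always means the new partition came out only a few hundred blocks in size, so checking the partition table before formatting is the quick test. A sketch with the device names from the conversation; the inode-size flag is what the GlusterFS admin guides of that era suggest for bricks:

    fdisk -l /dev/vda                   # confirm the disk and partition sizes
    mkfs.xfs -f -i size=512 /dev/vda1   # format the first partition as an XFS brick
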
12:44 aravindavk joined #gluster
12:46 Slydder hey guys. got a strange situation here. I have a single gfs node (3.5.2) with a single volume, mounted either via NFS or glusterfs, where updating existing files hangs. adding new files and deleting existing files work as expected though. It doesn't matter if I mount the volume with fuse or nfs.
12:47 JoseBravo Hi guys, in my setup of gluster geo-replication 3.5.2, changes I make or deletes are not being propagated to the slave. If I try to change ignore-deletes I get an error (Reserved Option)
12:48 msmith joined #gluster
12:53 nshaikh joined #gluster
12:56 msmith joined #gluster
12:57 capri joined #gluster
12:58 coredump joined #gluster
13:02 justinmburrous joined #gluster
13:06 coredump joined #gluster
13:06 theron joined #gluster
13:11 capri joined #gluster
13:16 ppai joined #gluster
13:36 side_control joined #gluster
13:41 ricky-ti1 joined #gluster
13:47 h4rry joined #gluster
13:55 ricky-ticky joined #gluster
13:57 jaroug joined #gluster
14:01 tdasilva joined #gluster
14:03 justinmburrous joined #gluster
14:11 johnmwilliams__ joined #gluster
14:17 aravindavk joined #gluster
14:34 jaroug hi, I'm running gluster 3.3.2 on two nodes, 1 replicated volume; listing 25000 files in a subdirectory takes up to 5 minutes (against 100ms on the local brick's fs), any tips to speed it up ?
14:36 chirino joined #gluster
14:36 plarsen joined #gluster
14:36 lmickh joined #gluster
14:36 Slydder jaroug: am having the same problem on gfs 3.5.2
14:37 Slydder have you tried mounting both fuse and nfs and testing which one is faster?
14:38 jaroug not yet
14:39 jaroug im going to :)
14:39 jaroug btw I was listing with a find . -type f
14:39 Fen1 joined #gluster
14:39 Slydder well. give both a shot first. for me it doesn't matter which I choose. both are painfully slow.
14:39 jaroug stace doesn't show anything special, just that lstat consumes 80% of the time
14:39 jaroug strace*
14:41 R0ok_ Slydder, jaroug: what about performance cache sizes? do you think that would have an effect on directory listing times ?
14:42 jaroug I tried to increase it from default value to 512M
14:42 jaroug doesn't change anything
14:42 Slydder caching has more write impact than read.
14:43 Slydder oh. except for the performance.cache-size. forgot about that one. that is a read cache.
14:45 R0ok_ Slydder: what about performance.io-cache & performance.readdir-ahead? i think they'd also affect the times
14:45 jaroug Slydder: ok, mounting it via nfsv3 takes 8 seconds to list the 25k files
14:47 jaroug R0ok_: performance.io-cache option is available in gluster 3.3 ?
14:48 fattaneh1 joined #gluster
14:50 sprachgenerator joined #gluster
14:52 R0ok_ jaroug: yea, performance.io-cache option is available in 3.3
14:52 R0ok_ jaroug: u can set it to on/off
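
For reference, both settings being discussed are ordinary volume options; a sketch with a placeholder volume name myvol. performance.readdir-ahead only exists in newer releases, as jaroug finds out just below.

    gluster volume set myvol performance.io-cache on
    gluster volume set myvol performance.cache-size 512MB
    gluster volume info myvol    # shows which options are currently set
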
14:53 jaroug ok
14:54 jaroug I'm surprised it's not specified in http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
14:54 nbalachandran joined #gluster
14:54 hagarth joined #gluster
14:59 theron_ joined #gluster
15:01 tryggvil joined #gluster
15:02 jaroug R0ok_: performance.io-cache speeds it up, "only" 1 min now thx :), but performance.readdir-ahead doesn't seem to exist in 3.3
15:07 bene2 joined #gluster
15:07 R0ok_ jaroug: i think you also might wanna do some tcp stack optimization in /etc/sysctl.conf
15:08 jaroug R0ok_: any references ?
15:12 jaroug ok, that page I guess http://www.gluster.org/community/documentation/index.php/Linux_Kernel_Tuning
15:12 jaroug thx
15:17 R0ok_ jaroug: yea i guess that's it, on our storage nodes we do that in /etc/sysctl.conf http://ur1.ca/ia29w
15:18 jaroug cool, thx :)
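
The ur1.ca paste has since expired, so the exact values are lost; in the spirit of the kernel-tuning page linked above, this is a hypothetical example of the kind of TCP buffer settings people drop into sysctl on storage nodes. The numbers are illustrative only and should be sized to the RAM and NICs in question.

    # /etc/sysctl.d/90-gluster.conf
    net.core.rmem_max = 67108864
    net.core.wmem_max = 67108864
    net.ipv4.tcp_rmem = 4096 87380 33554432
    net.ipv4.tcp_wmem = 4096 65536 33554432
    net.core.netdev_max_backlog = 30000
    vm.swappiness = 10

    # apply without a reboot
    sysctl -p /etc/sysctl.d/90-gluster.conf
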
15:18 daMaestro joined #gluster
15:18 lpabon joined #gluster
15:23 theron joined #gluster
15:23 sputnik13 joined #gluster
15:24 fubada purpleidea: I think my issue with puppet-gluster is related to puppet dynamic environments, have you tested this configuration
15:25 side_control joined #gluster
15:31 msmith joined #gluster
15:36 h4rry_ joined #gluster
15:53 JoseBravo I need some help with the problem with geo-replication any idea of why is that happening?
15:56 lalatenduM joined #gluster
15:56 semiosis JoseBravo: please describe the problem
16:02 lpabon joined #gluster
16:02 ira joined #gluster
16:03 JoseBravo In my setup of gluster geo-replication, changes I make or deletes are not being propagated to the slave. If I try to change ignore-deletes I get an error (Reserved Option)
16:03 JoseBravo gluster volume geo-replication home-pi1-1 pi1-2::home-pi1-2 config ignore-deletes 1
16:03 JoseBravo Reserved option.
16:03 JoseBravo Now it's only replicating new files; if I edit a file it's not being replicated. I tried with Changelog Crawl and Hybrid Crawl but with both I get the same problem
16:05 DV joined #gluster
16:10 MacWinner joined #gluster
16:12 tryggvil_ joined #gluster
16:12 _nothau joined #gluster
16:15 msvbhat_ joined #gluster
16:15 kodapa_ joined #gluster
16:15 m0zes_ joined #gluster
16:16 mkzero_ joined #gluster
16:17 primemin1sterp joined #gluster
16:17 mjrosenb joined #gluster
16:17 jiqiren joined #gluster
16:18 daMaestro joined #gluster
16:18 DV__ joined #gluster
16:18 semiosis JoseBravo: sorry i dont have any real experience with geo-rep.  hopefully someone else can help
16:18 semiosis i only set it up once but never really used it
16:19 JoseBravo thanks semiosis
16:19 frb joined #gluster
16:19 n-st joined #gluster
16:24 ricky-ticky1 joined #gluster
16:32 h4rry joined #gluster
16:34 Fen1 joined #gluster
16:37 zerick joined #gluster
16:47 diegows joined #gluster
16:51 msmith joined #gluster
16:57 jmarley joined #gluster
16:57 jmarley joined #gluster
17:01 msmith joined #gluster
17:02 doo joined #gluster
17:03 aravindavk joined #gluster
17:07 sputnik13 joined #gluster
17:12 aravindavk joined #gluster
17:20 sputnik13 joined #gluster
17:23 rwheeler joined #gluster
17:23 h4rry joined #gluster
17:27 kanagaraj joined #gluster
17:28 jobewan joined #gluster
17:35 Pupeno_ joined #gluster
17:39 mojibake joined #gluster
17:42 purpleidea fubada: i have not personally tested dynamic environments, but others have and this shouldn't affect how a module works, assuming you set it up correctly...
17:50 PeterA joined #gluster
17:53 dtrainor_ joined #gluster
17:54 dtrainor_ joined #gluster
17:55 RaSTar joined #gluster
17:58 _Bryan_ joined #gluster
18:14 anoopcs joined #gluster
18:14 htrmeira joined #gluster
18:15 ekuric joined #gluster
18:30 soumya_ joined #gluster
18:31 Pupeno joined #gluster
18:33 jobewan joined #gluster
18:35 fubada purpleidea: i was able to use another module (stdlibs concat function) without issues, but under gluster still get undefined method. Can you think of anything else?
18:35 fubada pluginsync is working
18:37 elico joined #gluster
18:40 fubada purpleidea: i just got the module to work by not using gluster::simple
18:41 fubada gluster::server works, in its most basic use case
18:41 fubada https://gist.github.com/aamerik/b742cf6c20c05f28389b
18:58 lpabon joined #gluster
19:00 n-st joined #gluster
19:00 dtrainor joined #gluster
19:02 purpleidea fubada: try using gluster::volume which actually calls that function
19:02 purpleidea fubada: as i said before, something is up with your setup
19:03 purpleidea 1) try the vagrant env
19:03 fubada purpleidea: im having luck so far
19:03 purpleidea 2) if all else fails, i can try a shell on your box...
19:03 purpleidea fubada: i don't know what to tell you. even #puppet thinks it's your box :( i'm sorry
19:03 purpleidea fubada: it's a puppet issue unless you can find out why otherwise
19:03 purpleidea (bbl)
19:03 fubada purpleidea: seems to be its just the ::simple
19:03 fubada https://gist.github.com/anonymous/8cedce8cfda1951c0279 works
19:04 purpleidea 15:07 < purpleidea> fubada: try using gluster::volume which actually calls that
19:04 purpleidea function
19:04 fubada got it
19:06 fubada SERVER: undefined method `brick_str_to_hash'
19:06 fubada heh i give up, thanks man
19:08 semiosis maybe puppet version issue?
19:09 semiosis fubada: what version of puppet?
19:09 fubada im latest 3.7.1 and purpleidea tested it with that ver using vagrant
19:09 semiosis ah
19:11 Pupeno_ joined #gluster
19:12 calum_ joined #gluster
19:14 coredump joined #gluster
19:22 cmtime I have some major corruption because of 3 different problems. Short story I get IO errors on two bricks.  The files are not on the brick but gluster thinks they are. About 4k files are messed up with split brain.
19:23 cmtime Any thoughts on the best way to fix the replica pair. So they are working well again with my 12 node setup.
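
The usual 3.4/3.5-era workflow for replica split-brain, sketched with placeholder volume and brick paths: pick the copy you trust, remove the bad copy and its gfid hard link on the other brick, then let self-heal recreate it.

    # list the entries the self-heal daemon flags as split-brain
    gluster volume heal myvol info split-brain

    # on the brick holding the bad copy (placeholders):
    #   rm /export/brick1/some/dir/file
    #   rm /export/brick1/.glusterfs/ab/cd/abcd1234-...   # the gfid hard link for that file

    # then kick off a heal
    gluster volume heal myvol
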
19:25 tryggvil joined #gluster
19:38 scuttlemonkey_ joined #gluster
19:49 skippy Arrfab: ping?  Curious if you happened to recall the details of the "xfs_growfs results in no free space errors" link you mentioned to me yesterday
19:54 Pupeno joined #gluster
19:54 social joined #gluster
19:57 charta joined #gluster
20:08 PeterA joined #gluster
20:21 semiosis skippy: channel log links are in the /topic
20:21 semiosis oh wait you mean the link he couldn't remember
20:21 skippy yes
20:21 semiosis right
20:25 tryggvil joined #gluster
20:26 _dist joined #gluster
20:29 glusterbot joined #gluster
20:30 jbrooks Hey all -- in which log can I find info about gluster hooks -- whether and how they failed, for instance?
20:31 JoeJulian my first guess is glusterd.vol.log
20:34 JoeJulian yeah, that's where hooks logs should be.
20:34 JoeJulian according to the source...
20:36 jbrooks JoeJulian, I see it, thanks
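
On a stock install the file being referred to is usually /var/log/glusterfs/etc-glusterfs-glusterd.vol.log, though the exact name depends on how glusterd was started; the hook scripts themselves live under /var/lib/glusterd/hooks.

    grep -i hook /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    ls /var/lib/glusterd/hooks/1/
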
20:36 Pupeno joined #gluster
20:39 virusuy joined #gluster
20:39 virusuy joined #gluster
20:43 Pupeno_ joined #gluster
20:44 sputnik13 joined #gluster
20:56 justinmburrous joined #gluster
20:58 glusterbot New news from resolvedglusterbugs: [Bug 978205] NFS mount failing for several volumes with 3.4.0 beta3. Only last one created can be mounted with NFS. <https://bugzilla.redhat.com/show_bug.cgi?id=978205> || [Bug 764655] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=764655>
20:58 glusterbot New news from newglusterbugs: [Bug 1117822] Tracker bug for GlusterFS 3.6.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1117822> || [Bug 1148520] Memory leaks in ec while traversing directories <https://bugzilla.redhat.com/show_bug.cgi?id=1148520> || [Bug 1148521] Memory leaks in ec while traversing directories <https://bugzilla.redhat.com/show_bug.cgi?id=1148521> || [Bug 1148262] [gluster-nagios] Nagios plugins for vol
21:03 sputnik13 joined #gluster
21:09 sputnik13 joined #gluster
21:33 partner to make things simple, stupid question: what version should i run in production today?
21:35 semiosis just my opinion... the latest one that you validate does everything you need (including running under load)
21:35 partner yeah, i fully agree
21:36 partner i was almost happy with 3.3.2 but lets face the fact, it ain't getting better
21:37 partner so we upgraded to 3.4.5, brick ports changed and yet we are facing the rebalancing memory leak
21:37 partner though i'm not sure why we want to rebalance in the first place, it will never complete
21:38 partner hence, i was thinking further and asking if there is community recommendation for "you should be running this"
21:44 partner as it's just enormously laborious to mimic the production loads, and most often it's done already by plenty of people by the time i ask about it :o
21:52 gildub joined #gluster
21:58 semiosis partner: well, it pains me to say this, but my "community" recommendation is to not rebalance.
22:14 anotheral joined #gluster
22:14 anotheral In a distributed/replicated volume, how can I see which bricks are replicas of each other?
22:24 semiosis anotheral: use 'gluster volume info'; the bricks are reported in replica sets.  for example, in a six-brick replica 3 volume the first three bricks are replicas of each other, just as the last three bricks are replicas of each other.
22:25 msmith joined #gluster
22:29 anotheral so that's always the case, even if I needed to replace a node or something?
22:30 anotheral in my 2x2, bricks 1 and 2 are replicas, ditto 3 & 4?
22:30 semiosis yep
22:30 anotheral thanks!
22:30 semiosis yw
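
To make that concrete, here is how 'gluster volume info' would list a hypothetical 2x2 distributed-replicated volume (all names invented); bricks are grouped into replica sets in the order shown:

    gluster volume info myvol
    #  Volume Name: myvol
    #  Type: Distributed-Replicate
    #  Number of Bricks: 2 x 2 = 4
    #  Bricks:
    #  Brick1: server1:/export/brick1   <- replica pair 1
    #  Brick2: server2:/export/brick1   <- replica pair 1
    #  Brick3: server1:/export/brick2   <- replica pair 2
    #  Brick4: server2:/export/brick2   <- replica pair 2
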
22:38 MacWinner joined #gluster
23:02 tryggvil joined #gluster
23:06 sprachgenerator joined #gluster
23:28 justinmburrous joined #gluster
23:52 h4rry joined #gluster
23:54 msmith joined #gluster
