
IRC log for #gluster, 2015-02-18


All times shown according to UTC.

Time Nick Message
00:07 JoeJulian _polto_: yes, just need to add the stripe count and change the replica count during the add-brick, for instance: gluster volume add-brick myvol replica 3 server{11,12,13,14,15}:/gluster/brick
00:08 JoeJulian assuming a 5 stripe volume.
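As a rough sketch, the brace expansion in JoeJulian's example expands to one brick path per server (the volume and server names are his illustrative placeholders):
    gluster volume add-brick myvol replica 3 \
        server11:/gluster/brick server12:/gluster/brick server13:/gluster/brick \
        server14:/gluster/brick server15:/gluster/brick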
00:08 Intensity joined #gluster
00:13 badone_ joined #gluster
00:13 _polto_ JoeJulian: thanks !
00:19 SOLDIERz joined #gluster
00:26 MacWinner joined #gluster
00:41 joseki joined #gluster
00:47 MugginsM joined #gluster
00:50 MugginsM joined #gluster
00:57 MugginsM joined #gluster
01:31 sprachgenerator joined #gluster
01:32 SOLDIERz joined #gluster
02:10 calum_ joined #gluster
02:18 zerick_ joined #gluster
02:32 _polto_ joined #gluster
02:33 SOLDIERz joined #gluster
02:37 bala joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:08 bharata-rao joined #gluster
03:34 SOLDIERz joined #gluster
03:43 spandit joined #gluster
03:54 shubhendu joined #gluster
03:56 itisravi joined #gluster
04:05 RameshN joined #gluster
04:05 ppai joined #gluster
04:06 kanagaraj joined #gluster
04:08 ndarshan joined #gluster
04:10 atinmu joined #gluster
04:12 bala joined #gluster
04:20 dgandhi joined #gluster
04:30 MacWinner joined #gluster
04:35 nbalacha joined #gluster
04:37 jiffin joined #gluster
04:38 Manikandan joined #gluster
04:38 SOLDIERz joined #gluster
04:39 anoopcs joined #gluster
04:44 ppai joined #gluster
04:57 rjoseph|afk joined #gluster
05:03 aravindavk joined #gluster
05:18 anil_ joined #gluster
05:20 prasanth_ joined #gluster
05:23 hagarth joined #gluster
05:23 kdhananjay joined #gluster
05:25 soumya__ joined #gluster
05:31 kshlm joined #gluster
05:31 hchiramm joined #gluster
05:33 badone__ joined #gluster
05:37 overclk joined #gluster
05:42 meghanam joined #gluster
05:43 SOLDIERz joined #gluster
06:00 deepakcs joined #gluster
06:01 ramteid joined #gluster
06:09 atalur joined #gluster
06:15 rafi joined #gluster
06:16 spandit joined #gluster
06:17 kumar joined #gluster
06:23 doekia joined #gluster
06:28 rafi1 joined #gluster
06:28 smohan joined #gluster
06:31 ppai joined #gluster
06:33 msvbhat joined #gluster
06:33 raghu` joined #gluster
06:40 nbalacha joined #gluster
06:45 overclk joined #gluster
06:47 SOLDIERz joined #gluster
06:51 Manikandan joined #gluster
06:53 kovshenin joined #gluster
06:54 gem joined #gluster
06:56 schandra joined #gluster
07:01 soumya__ joined #gluster
07:05 schandra_ joined #gluster
07:06 nshaikh joined #gluster
07:07 smohan_ joined #gluster
07:10 kovshenin joined #gluster
07:12 mbukatov joined #gluster
07:16 jestan joined #gluster
07:16 anrao joined #gluster
07:17 glusterbot News from newglusterbugs: [Bug 1188184] Tracker bug :  NFS-Ganesha new features support for  3.7. <https://bugzilla.redhat.com/show_bug.cgi?id=1188184>
07:17 glusterbot News from newglusterbugs: [Bug 1193767] [Quota] : gluster quota list does not show proper output if executed within few seconds of glusterd restart <https://bugzilla.redhat.com/show_bug.cgi?id=1193767>
07:20 jtux joined #gluster
07:22 spandit joined #gluster
07:26 bala joined #gluster
07:29 jestan Hi, I have a gluster replica with 2 bricks and 2 native client mounts on the same servers. When I reboot one of the brick servers, the other brick server's mount is temporarily not accessible; it only resumes reading or writing after a few seconds. Any thoughts on how to avoid this issue? (reducing network.ping-timeout to 1 second gives better results, but the issue is still there for 1 second)
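For reference, the timeout jestan mentions is an ordinary volume option; a minimal sketch of tuning it, assuming a volume named "myvol" (the name and value are illustrative, not a recommendation):
    # lower how long clients wait before giving up on an unreachable brick
    gluster volume set myvol network.ping-timeout 10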
07:32 atalur joined #gluster
07:35 awerner joined #gluster
07:41 jestan joined #gluster
07:42 Philambdo joined #gluster
07:44 rafi joined #gluster
07:48 _zerick_ joined #gluster
07:48 bala joined #gluster
07:50 overclk joined #gluster
07:52 SOLDIERz joined #gluster
08:00 nbalacha joined #gluster
08:01 deniszh joined #gluster
08:02 ntt Hi, i'm trying to change the hostname of a peer (from ip to hostname). When i do "gluster peer status" i have hostname = 10.0.0.1. I've done "gluster peer probe gstorage1" (gstorage1 = 10.0.0.1) but now i have hostname = 10.0.0.1 and another field named "other names = gstorage1". Is this normal? In /var/lib/glusterd/peers/<uuid> i have 2 fields, hostname1 and hostname2, where 1 = the ip and 2 = gstorage1.
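A sketch of the peer file layout ntt describes (the UUID and state value shown here are illustrative):
    # /var/lib/glusterd/peers/<uuid>
    uuid=<uuid>
    state=3
    hostname1=10.0.0.1
    hostname2=gstorage1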
08:17 glusterbot News from resolvedglusterbugs: [Bug 1032894] spurious ENOENTs when using libgfapi <https://bugzilla.redhat.com/show_bug.cgi?id=1032894>
08:27 bjornar joined #gluster
08:27 atalur joined #gluster
08:29 _NiC joined #gluster
08:34 [Enrico] joined #gluster
08:34 _polto_ joined #gluster
08:36 schandra__ joined #gluster
08:38 cornus_ammonis joined #gluster
08:40 jtux joined #gluster
08:41 javi404 joined #gluster
08:41 atrius` joined #gluster
08:48 rafi1 joined #gluster
08:55 SOLDIERz joined #gluster
08:56 karnan joined #gluster
09:01 awerner joined #gluster
09:04 fsimonce joined #gluster
09:09 kumar joined #gluster
09:10 ndarshan joined #gluster
09:14 LebedevRI joined #gluster
09:16 ppai joined #gluster
09:17 Norky joined #gluster
09:20 dusmant joined #gluster
09:24 badone_ joined #gluster
09:27 Debloper joined #gluster
09:27 Debloper joined #gluster
09:34 smohan joined #gluster
09:38 Manikandan joined #gluster
09:38 afics joined #gluster
09:39 [Enrico] joined #gluster
09:43 shaunm joined #gluster
09:45 rafi joined #gluster
09:48 glusterbot News from resolvedglusterbugs: [Bug 1185950] adding replication to a distributed volume makes the volume unavailable <https://bugzilla.redhat.com/show_bug.cgi?id=1185950>
09:48 meghanam joined #gluster
09:50 soumya__ joined #gluster
09:57 SOLDIERz joined #gluster
09:59 _br_ joined #gluster
10:02 badone__ joined #gluster
10:05 elico joined #gluster
10:09 maveric_amitc_ joined #gluster
10:22 R0ok_ joined #gluster
10:25 coredump joined #gluster
10:25 stickyboy joined #gluster
10:27 tanuck joined #gluster
10:29 xaeth joined #gluster
10:34 bene2 joined #gluster
10:37 Slashman joined #gluster
10:45 kapsel joined #gluster
10:48 liquidat joined #gluster
10:49 itpings just killed my gluster server and fixing it
10:52 yosafbridge joined #gluster
10:53 raz joined #gluster
10:53 _polto_ joined #gluster
10:54 raz hrmm.. this is not funny..
10:55 itpings yes i know
10:55 itpings buts its a test server
10:55 itpings now its giving me error
10:56 itpings its not probing its peer
10:56 itpings firewall is off
10:56 itpings daemons are running
10:56 raz oh sorry, i was talking about my own problem :)
10:56 itpings lol i thought you were...
10:56 itpings anyway
10:59 itpings from one side probe is fine
11:00 ndarshan joined #gluster
11:00 SOLDIERz joined #gluster
11:03 itpings after restart its probing now
11:10 RicardoSSP joined #gluster
11:10 RicardoSSP joined #gluster
11:13 kbyrne joined #gluster
11:17 awerner joined #gluster
11:19 itpings need some help
11:24 kovshenin joined #gluster
11:25 meghanam joined #gluster
11:28 gothos Hey, is there any plan to implement file attributes in glusterfs? (ie. lsattr, chattr stuff)
11:36 st_ joined #gluster
11:38 aravindavk joined #gluster
11:39 ndevos REMINDER: Gluster Community Meeting starts in ~20 minutes in #gluster-meeting
11:42 itpings how to set already created volume
11:42 itpings do i add brick ?
11:44 st_ Hi all! I recently encountered a strange issue with gluster. We have a simple 2-node brick replication setup. One of our users generates a huge number of very small files (about 170k). Unfortunately that was too much for gluster, causing a timeout and some zombie processes even when just listing that folder. The only thing that reclaimed the volume was restarting it on the server, along with all the glusterfs processes. Now we are trying to figure out how to delete those
11:44 st_ files without downtime for this volume.
11:45 st_ We want to create a copy of that volume and rsync it over without that folder, then disable the old volume and manually delete the folder on the server brick with some low ionice.
11:46 st_ I'm not exactly sure about the behavior of disabled volumes in gluster... if i manually delete those files, will it try to replicate them from the other server? Is there a way to force it not to in this case?
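A minimal sketch of the copy st_ describes, assuming both volumes are mounted on a client and the oversized directory is the only thing to skip (all paths and the excluded name are illustrative):
    # copy the old volume into the new one, skipping the folder with ~170k small files
    rsync -a --exclude='huge-dir/' /mnt/oldvol/ /mnt/newvol/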
11:48 ndevos gothos: hmm, dont those attributes work? nobody ever complained about it, I think
11:53 gothos ndevos: I'm getting: lsattr: Function not implemented While reading flags on $directory
11:53 gothos https://bugzilla.redhat.com/show_bug.cgi?id=762410 is a bug that discusses this and all our kernels are >=2.6.32
11:53 glusterbot Bug 762410: low, low, ---, csaba, CLOSED WONTFIX, chattr: Function not implemented
11:54 ndevos gothos: hmm, yeah, I get that too :-/
11:55 gothos I just found that since we would like to restrict user shell histories on glusterfs to append only (ie. chattr +a)
11:56 ndevos gothos: you can file a bug for it (a new one), I'm pretty sure other users would like to see that functioning as well
11:56 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
11:57 soumya__ joined #gluster
11:57 gothos ndevos: I'll do that as soon as my workload allows it :)
11:57 ndevos gothos: ok :)
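What gothos is after, shown as it works on a regular local filesystem; per the exchange above, the same command currently fails with "Function not implemented" on a FUSE-mounted gluster volume (the path is illustrative):
    # mark a shell history file append-only, then verify the attribute
    chattr +a /home/user/.bash_history
    lsattr /home/user/.bash_history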
12:01 SOLDIERz joined #gluster
12:02 tanuck joined #gluster
12:02 ndevos REMINDER: Gluster Community Meeting starts *now* in #gluster-meeting
12:08 jiffin joined #gluster
12:09 st_ Or maybe is there a way to delete files directly on the brick, in the volume directory, while the volume is running?
12:12 ira joined #gluster
12:19 st_ I'm sorry I'm asking so many questions, but I have another idea. Can I sync data directly from the server (the gluster brick dir) to the new volume without mounting the old volume (to avoid listing files through a client)?
12:19 st_ Thanks in advance for any help or ideas.
12:20 itisravi joined #gluster
12:22 jdarcy joined #gluster
12:27 bene2 joined #gluster
12:30 kdhananjay joined #gluster
12:35 ira joined #gluster
12:52 bennyturns joined #gluster
12:54 rafi1 joined #gluster
12:54 calisto joined #gluster
12:55 bennyturns joined #gluster
13:00 anoopcs joined #gluster
13:04 SOLDIERz joined #gluster
13:05 rafi joined #gluster
13:12 shaunm joined #gluster
13:19 rjoseph joined #gluster
13:21 st_ While rsyncing between volumes, do I also need to rsync the .glusterfs dir with all its metadata? Or will that generate itself on the destination volume while syncing, so that I should actually omit it?
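If the copy is made by reading a brick directly rather than a client mount, the usual approach is to skip gluster's internal metadata directory; a sketch under that assumption (paths are illustrative):
    # .glusterfs holds gfid hardlinks/symlinks maintained by gluster itself;
    # copy only the user data into the new volume's client mount
    rsync -a --exclude='.glusterfs/' /data/brick1/oldvol/ /mnt/newvol/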
13:32 calisto joined #gluster
13:45 virusuy joined #gluster
13:45 virusuy joined #gluster
13:46 jmarley joined #gluster
13:47 bala joined #gluster
13:48 RameshN joined #gluster
13:49 anrao joined #gluster
13:51 devilspgd joined #gluster
13:51 dblack joined #gluster
13:52 calisto joined #gluster
14:00 awerner joined #gluster
14:08 SOLDIERz joined #gluster
14:09 RicardoSSP joined #gluster
14:09 RicardoSSP joined #gluster
14:13 T3 joined #gluster
14:15 meghanam joined #gluster
14:20 dgandhi joined #gluster
14:21 meghanam_ joined #gluster
14:23 delhage joined #gluster
14:29 wkf joined #gluster
14:30 nbalacha joined #gluster
14:30 rjoseph joined #gluster
14:31 ninkotech joined #gluster
14:31 ninkotech_ joined #gluster
14:34 georgeh-LT2 joined #gluster
14:34 shubhendu joined #gluster
14:38 anrao joined #gluster
14:40 _polto_ joined #gluster
14:42 calisto joined #gluster
14:43 awerner joined #gluster
14:47 ildefonso joined #gluster
14:50 _Bryan_ joined #gluster
14:53 mrEriksson joined #gluster
14:55 B21956 left #gluster
14:55 tdasilva joined #gluster
14:56 B21956 joined #gluster
15:05 soumya joined #gluster
15:13 shubhendu joined #gluster
15:15 xavih joined #gluster
15:19 glusterbot News from newglusterbugs: [Bug 1193929] GlusterFS can be improved <https://bugzilla.redhat.com/show_bug.cgi?id=1193929>
15:21 wushudoin joined #gluster
15:23 malevolent joined #gluster
15:24 dbruhn joined #gluster
15:25 plarsen joined #gluster
15:32 ghenry joined #gluster
15:32 ghenry joined #gluster
15:37 kshlm joined #gluster
15:46 bene2 joined #gluster
15:49 meghanam joined #gluster
15:50 meghanam joined #gluster
15:51 coredump joined #gluster
15:57 firemanxbr joined #gluster
16:03 ndevos JustinClift: meet firemanxbr - and firemanxbr, meet JustinClift
16:04 deepakcs joined #gluster
16:04 ndevos I hope you have a great time hashing out any Gerrit update plans/ideas :D
16:04 firemanxbr ndevos hey guy :)
16:04 firemanxbr i'm here :D
16:04 ndevos firemanxbr: hehe, yeah, I just noticed :D
16:05 T3 joined #gluster
16:05 ndevos firemanxbr: JustinClift is the one that knows the most about the Gerrit instance, and he would be the main contact for getting access to VMs for testing and all
16:06 rafi joined #gluster
16:06 firemanxbr ndevos I'm writing documentation about Gerrit 2.10, how to install or update it :)
16:06 ndevos firemanxbr: oh, that sounds really helpful already!
16:08 firemanxbr ndevos do you have access to new VMs for the update? I can do the upgrade, PoC-style, for the project.
16:08 ndevos firemanxbr: I could setup a VM, but I do not have access to the Gerrit system to get you a copy/backup of the database
16:09 ndevos firemanxbr: JustinClift has more power and promised to help you with everything you need
16:09 firemanxbr ndevos,  okay no problem, I'm waiting :)
16:10 ndevos if you say his name 3x, he might appear? JustinClift JustinClift JustinClift
16:10 firemanxbr lol :D
16:11 firemanxbr in my work IRC is blocked :(, I'm accessing for my container in OpenShift :D
16:11 firemanxbr but is very slow :]
16:13 ndevos have you tried https://webchat.freenode.net/?channels=gluster ?
16:14 firemanxbr ndevos, blocked too, I'm using: https://try.waartaa.com
16:14 firemanxbr :)
16:14 ndevos oh, wow
16:15 firemanxbr ndevos,  I hat proxy's :D
16:15 firemanxbr *hate
16:15 ndevos yeah, thats very inconvenient
16:17 JustinClift Heh, reading back
16:17 JustinClift Addd firemanxbr, you're the guy Niels has been talking about
16:17 JustinClift s/Addd/Ahhh/
16:17 glusterbot What JustinClift meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
16:18 * ndevos laughs at glusterbot
16:18 JustinClift semiosis: ^ Awesome
16:18 pdrakeweb joined #gluster
16:19 glusterbot News from newglusterbugs: [Bug 1193970] Fix spurious ssl-authz.t regression failure (backport) <https://bugzilla.redhat.com/show_bug.cgi?id=1193970>
16:25 bene3 joined #gluster
16:26 cmorandin joined #gluster
16:34 hagarth joined #gluster
16:35 kovshenin joined #gluster
16:37 T3 joined #gluster
16:42 kovshenin joined #gluster
16:42 gem joined #gluster
16:44 rafi joined #gluster
16:49 rafi joined #gluster
16:55 Pupeno joined #gluster
16:55 Pupeno joined #gluster
16:55 elico joined #gluster
16:56 social joined #gluster
17:18 jobewan joined #gluster
17:24 bfoster joined #gluster
17:26 tru_tru joined #gluster
17:43 bennyturns joined #gluster
17:52 rafi joined #gluster
17:57 chirino joined #gluster
18:04 Rapture joined #gluster
18:12 m0zes joined #gluster
18:16 papamoose joined #gluster
18:21 elico joined #gluster
18:48 SOLDIERz joined #gluster
19:12 JoeJulian semiosis: Just a nudge regarding bug 1113778
19:12 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1113778 medium, unspecified, ---, pkarampu, ASSIGNED , gluster volume heal info keep reports "Volume heal failed"
19:14 semiosis JoeJulian++
19:14 glusterbot semiosis: JoeJulian's karma is now 20
19:17 MacWinner joined #gluster
19:26 Pupeno_ joined #gluster
19:34 aulait joined #gluster
19:37 alan^ joined #gluster
19:39 alan^ JoeJulian: hi, the other week you told me to use the CLI for configuring volumes instead of editing vol files. I'm trying to edit the performance.cache-size to be much larger on the client mounting the volume than on the server side. Can the CLI configure that?
19:41 JoeJulian alan^: "gluster volume set help" will show you all the settings that can be changed through the CLI and, yes, that is one of them.
19:42 alan^ No, I mean... when I tried to change it, it set it for both server side and client side. I wanted the client to mount with a different size than the server.
19:43 alan^ On the client side I'm mounting the lazy way with the mount.glusterfs command
19:43 semiosis ,,(options)
19:43 glusterbot See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
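The commands the factoid refers to, with an illustrative volume name and value:
    # list every option the CLI can set, along with defaults
    gluster volume set help
    # change one option; note it applies to the whole volume graph, not per-client
    gluster volume set myvol performance.cache-size 256MB
    # options changed from their defaults show up under "Options Reconfigured"
    gluster volume info myvol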
19:44 Pupeno joined #gluster
19:45 alan^ One time I set cache-size to 10GB and when I made a virtual machine with like 4GB, it failed to mount
19:45 semiosis why do you need to set this cache size?
19:46 alan^ Performance? I have unused RAM that could be used for caching.
19:46 jdarcy joined #gluster
19:46 alan^ It seemed to help with latency when listing directories and stuff like that
19:49 alan^ Maybe I should look at the quick-read translator instead of the io-cache
19:51 alan^ io-cache seems to be what I want though, if I have 10 machines suddenly trying to read the same large file
19:52 semiosis alan^: the kernel page cache on the server will already cache data read by multiple clients.  you should look into tuning your kernel
19:53 semiosis @kernel
19:53 glusterbot semiosis: I do not know about 'kernel', but I do know about these similar topics: 'kernel tuning'
19:53 semiosis @kernel tuning
19:53 glusterbot semiosis: https://www.gluster.org/community/documentation/index.php/Linux_Kernel_Tuning
19:53 alan^ Thanks, I'll read up on it.
19:53 semiosis yw
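A couple of sysctl knobs of the kind discussed on the kernel-tuning page linked above; the values are illustrative examples, not recommendations:
    # keep dentry/inode caches around longer instead of reclaiming them aggressively
    sysctl -w vm.vfs_cache_pressure=50
    # start background writeback of dirty pages earlier
    sysctl -w vm.dirty_background_ratio=5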
19:56 alan^ Another question, unrelated to translators: so in my cloud replica-count-2 distributed storage pool I'm increasing the disk size by reimaging a brick to a new larger disk... I bring a node down, swap disks, remount and start the daemon. All good, it starts doing a balance on a single brick (for healing purposes I guess.) My question is, while this single-brick rebalance/heal is going, is it safe if I start doing the other pairs?
19:56 alan^ As long as I don't do the sibling of the one currently rebalancing, it should be safe, right?
19:56 semiosis heal is the proper term.  rebalance is something else unrelated to what you're doing
19:57 alan^ ok but gluster volume status labels it as "rebalance", but I did not trigger it.
19:57 semiosis yes it's safe.  i would grow several bricks at once.
19:57 semiosis hmm interesting, can you pastie.org the output?
19:58 Slasheri_ joined #gluster
19:58 alan^ sure, one sec
20:01 alan^ http://pastie.org/private/dbfr1zcufslthxwuemffq
20:01 alan^ The rebalance stats from the "completed" bricks are from a previous rebalance weeks ago
20:01 alan^ but as you can see, it shows one brick is rebalancing
20:01 alan^ it triggered by itself.
20:02 semiosis well that's new to me.  i've never done a rebalance but have done lots of brick replacements and never seen a rebalance start on its own
20:03 alan^ last time I brought a node down for maintenance while the others were running I could've sworn it triggered a heal
20:03 semiosis heal yes, rebalance no
20:05 alan^ gluster can only do 1 task at any given time, right? my worry is if I bring another node down when I bring it back it won't heal properly
20:05 alan^ if I do it while this single brick rebalance is going
20:05 semiosis yeah, that rebalance scares me.  if it were just a simple heal, you could have many going on at once
20:06 alan^ should I stop it? lol
20:06 semiosis but if it's really a rebalance, then i dont know what to tell you
20:06 alan^ Is there a way to heal specific bricks instead of the whole volume?
20:06 semiosis ,,(targeted self heal)
20:06 glusterbot https://web.archive.org/web/20130314122636/http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/
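The gist of the targeted-heal approach from that link: stat-ing files through a client mount makes gluster check (and queue) them for healing, so only the affected subtree needs to be walked; a sketch with an illustrative mount point:
    # walk just the directory that needs repair; the stat calls trigger self-heal checks
    find /mnt/myvol/affected-dir -noleaf -print0 | xargs -0 stat > /dev/null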
20:08 alan^ I'm trying to resize the whole volume so it's all even... Do you think I should stop any automatic heals that happen, do a node then do this statting heal trick?
20:09 semiosis a minute ago you were trying to grow your bricks
20:09 semiosis what are you really up to?
20:10 alan^ I am! lol -- so I have 12 nodes, with one 3TB brick in each
20:10 alan^ I want to make them all 6TB
20:10 alan^ 6TB per brick
20:10 alan^ and I have a replica-count of 2, so there's redundancy
20:11 semiosis great, then just replace each brick and let it heal.
20:11 alan^ but do I have to let it heal after each brick swap or can I do multiples simultaneously?
20:11 semiosis you can do multiple simultaneously as long as you do one side of replication before the other
20:13 alan^ so since I have a replica count of 2, if I have node1, node2, node3 and node4, I could bring down 1 and 3 to do the disk swap and it'd be safe because their sibling is still up? then when the heals stopped I would do the other side.
20:15 SOLDIERz joined #gluster
20:15 semiosis pretty much
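Before moving on to the second brick of each replica pair, the pending heals can be checked with something like the following (volume name is illustrative):
    # lists entries still waiting to be healed, per brick
    gluster volume heal myvol info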
20:16 Pupeno_ joined #gluster
20:17 fandi joined #gluster
20:20 fandi joined #gluster
20:26 Pupeno joined #gluster
20:27 deniszh joined #gluster
20:35 alan^ cool, thanks for clarifying. I've been doing it brick by brick and it's taking ages
20:52 Pupeno_ joined #gluster
21:00 Gill joined #gluster
21:00 haomai___ joined #gluster
21:01 deniszh joined #gluster
21:10 Pupeno joined #gluster
21:10 firemanxbr_ joined #gluster
21:15 firemanxbr_ joined #gluster
21:16 firemanxbr_brb joined #gluster
21:26 bene3 joined #gluster
21:28 Pupeno_ joined #gluster
21:47 elico left #gluster
21:52 shaunm joined #gluster
21:52 MugginsM joined #gluster
22:12 Gill joined #gluster
22:21 glusterbot News from newglusterbugs: [Bug 1057295] glusterfs doesn't include firewalld rules <https://bugzilla.redhat.com/show_bug.cgi?id=1057295>
22:21 glusterbot News from newglusterbugs: [Bug 883785] RFE: Make glusterfs work with FSCache tools <https://bugzilla.redhat.com/show_bug.cgi?id=883785>
22:23 theron joined #gluster
22:24 jackdpeterson2 joined #gluster
22:25 dgandhi joined #gluster
22:27 dgandhi joined #gluster
22:30 wkf joined #gluster
22:38 badone_ joined #gluster
23:13 Pupeno joined #gluster
23:13 Pupeno joined #gluster
23:21 plarsen joined #gluster
23:23 gildub joined #gluster
23:24 badone_ joined #gluster
23:25 bennyturns joined #gluster
23:29 theron joined #gluster
