IRC log for #gluster, 2015-07-03

All times shown according to UTC.

Time Nick Message
00:08 badone_ joined #gluster
00:16 plarsen joined #gluster
00:26 cholcombe joined #gluster
00:38 calavera joined #gluster
00:44 DV joined #gluster
00:46 Pupeno joined #gluster
00:59 neoice joined #gluster
01:08 maveric_amitc_ joined #gluster
01:20 kdhananjay joined #gluster
01:28 lyang0 joined #gluster
01:45 raghug joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 PatNarciso joined #gluster
01:53 MugginsM joined #gluster
01:54 PatNarciso question: if a brick needed to be copied from one machine to another, w/o the use gluster -- would 'cp -aR' of the brick root be sufficient?
01:56 PatNarciso ... correction.  if a brick needed to be copied from one *disk* to another, w/o the use gluster -- would 'cp -aR' of the brick root be sufficient?
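
No answer came up in-channel, but for context: a brick carries Gluster state in extended attributes (trusted.gfid, trusted.glusterfs.volume-id) and in the .glusterfs tree of hard links, so a plain cp -aR is only sufficient if xattrs and hard links survive the copy. A minimal sketch, assuming downtime is acceptable; the volume name and the /old-disk and /new-disk paths are hypothetical, and the new disk is assumed to be remounted at the original brick path afterwards:

    # stop the volume so the brick is quiescent
    gluster volume stop myvol

    # copy the brick preserving xattrs, ACLs and hard links (the .glusterfs tree is hard links)
    rsync -aAXH /old-disk/brick/ /new-disk/brick/

    # sanity check: the volume-id xattr must still be present on the new brick root
    getfattr -n trusted.glusterfs.volume-id /new-disk/brick

    gluster volume start myvol
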
02:00 aaronott joined #gluster
02:00 wkf joined #gluster
02:04 CyrilPeponnet joined #gluster
02:05 CyrilPeponnet joined #gluster
02:06 gildub joined #gluster
02:07 MugginsM does anyone have a guide for doing a rolling upgrade of 3.4 -> 3.6 (or 3.7) ?
02:07 MugginsM it'd be really handy if we could, but I'm not clear on if it's possible
02:08 MugginsM replicated servers
02:11 nangthang joined #gluster
02:12 RedW joined #gluster
02:12 PatNarciso there is; and I found my 3.4->3.7 upgrade fairly easy.
02:15 PatNarciso one small thing, that I found to be an issue was setting the volume version number... for whatever reason, it was not updated during my upgrade.   it was only an issue when I attempted using some 3.7 functionality, and the volumes returned a 3.4 version number.
02:21 bharata-rao joined #gluster
02:21 MugginsM that doesn't sound too terrible
02:22 MugginsM best to do clients or servers first?
02:24 harish joined #gluster
02:26 PatNarciso docs suggest servers first.
02:26 PatNarciso https://gluster.readthedocs.org/en/latest/Upgrade-Guide/Upgrade%20to%203.7/
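
Condensed, the server-first procedure from that guide looks roughly like the following for a replicated pair. This is a sketch assuming an RPM/systemd setup and a volume named myvol (both assumptions; package and service names vary by distro):

    # on one replicated server at a time
    systemctl stop glusterd
    killall glusterfsd glusterfs          # make sure brick and client processes are gone
    yum update glusterfs-server           # assumption: RPM-based install
    systemctl start glusterd

    # wait until self-heal has caught this server up before moving to the next one
    gluster volume heal myvol info
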
02:27 MugginsM oh yep
02:31 streetpizza joined #gluster
02:32 streetpizza left #gluster
02:32 streetpizza joined #gluster
02:33 streetpizza Hey all, what would be the maximum allowed latency between hosts sharing a replicated volume?
02:38 PatNarciso thats usually more application specific, right?
02:39 PatNarciso in one setup I had, Id allow it to go weeks... but thats because the data didnt need to be current.
02:41 julim joined #gluster
02:41 streetpizza So if I have two nodes participating in a replicated volume and they're in different data centers with a usual latency of 25-30 ms, that would be OK?
02:47 Pupeno joined #gluster
02:48 PatNarciso again - depends on the application.  ex: if the delay is acceptable for replicating... pictures, or video: then (in my opinion) sure.
02:53 PatNarciso pizza: I'm curious, whats the use case?    also, I'd suggest firing up two vps (london, and cali?) and giving it a try.
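
Taking the try-it-yourself suggestion literally, a two-node test could look like this sketch (london1, cali1 and the paths are hypothetical; replica 2 pays the WAN round trip on every write, so small-file operations show the latency cost most clearly):

    gluster peer probe cali1                      # run from london1
    gluster volume create testvol replica 2 \
        london1:/bricks/test/brick cali1:/bricks/test/brick
    # append 'force' above if the brick paths sit on the root filesystem
    gluster volume start testvol

    mount -t glusterfs london1:/testvol /mnt/test
    time touch /mnt/test/file{1..100}             # latency-bound workload
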
03:01 kevein joined #gluster
03:05 MugginsM joined #gluster
03:27 ppai joined #gluster
03:39 TheSeven joined #gluster
03:45 dusmant joined #gluster
03:51 atinm joined #gluster
03:54 sakshi joined #gluster
03:57 atinmu joined #gluster
04:00 itisravi joined #gluster
04:00 nangthang joined #gluster
04:03 RameshN joined #gluster
04:07 shubhendu joined #gluster
04:17 nbalacha joined #gluster
04:19 pppp joined #gluster
04:25 lexi2 joined #gluster
04:28 hagarth joined #gluster
04:30 free_amitc_ joined #gluster
04:34 RedW joined #gluster
04:38 sripathi joined #gluster
04:39 gem joined #gluster
04:42 anmolb joined #gluster
04:46 ramteid joined #gluster
04:47 theron joined #gluster
04:47 Pupeno joined #gluster
04:49 meghanam joined #gluster
04:56 vimal joined #gluster
05:05 spandit joined #gluster
05:08 ndarshan joined #gluster
05:10 ppai joined #gluster
05:13 MugginsM joined #gluster
05:14 deepakcs joined #gluster
05:19 SOLDIERz joined #gluster
05:23 Manikandan joined #gluster
05:28 ashiq joined #gluster
05:35 DV joined #gluster
05:36 rjoseph joined #gluster
05:40 rafi joined #gluster
05:42 atalur joined #gluster
05:43 Bhaskarakiran joined #gluster
05:52 kdhananjay joined #gluster
05:53 anrao joined #gluster
05:53 Pupeno joined #gluster
05:54 jiffin joined #gluster
05:56 dusmant joined #gluster
05:57 tigert joined #gluster
05:59 anrao joined #gluster
06:01 ashiq joined #gluster
06:05 raghu joined #gluster
06:06 prg3 joined #gluster
06:10 kotreshhr joined #gluster
06:13 anil joined #gluster
06:14 vmallika joined #gluster
06:15 PatNarciso joined #gluster
06:16 rgustafs joined #gluster
06:16 kdhananjay1 joined #gluster
06:24 spalai joined #gluster
06:28 vimal joined #gluster
06:29 atalur_ joined #gluster
06:35 jtux joined #gluster
06:36 jvandewege joined #gluster
06:39 schandra joined #gluster
06:41 karnan joined #gluster
06:44 nangthang joined #gluster
06:48 gem joined #gluster
06:51 Manikandan joined #gluster
06:51 glusterbot News from newglusterbugs: [Bug 1233273] 'unable to get transaction op-info' error seen in glusterd log while executing gluster volume status command <https://bugzilla.redhat.com/show_bug.cgi?id=1233273>
06:54 sripathi joined #gluster
06:58 ramky joined #gluster
07:09 psilvao joined #gluster
07:09 Ulrar So I was thinking of something. Since my goal is to forbid certain server to replicate to other certain server, may be I could just create a big volume with the "common" servers. Then, on those server I can create a separate volume for each of the other servers and use the big common volume as storage for the brick ?
07:10 Ulrar Or would the performance be horrible ?
07:10 jiqiren_ joined #gluster
07:11 lanning_ joined #gluster
07:11 akay1 PatNarciso, did you do a rolling upgrade or schedule downtime?
07:11 kblin joined #gluster
07:11 kblin joined #gluster
07:11 ghenry joined #gluster
07:11 ghenry joined #gluster
07:13 atalur_ joined #gluster
07:13 gem joined #gluster
07:14 PatNarciso akay1: after 5pm friday, get it done before 8am monday... sadly thats how my 'scheduled' upgrades roll.
07:15 PatNarciso akay1: seriously tho... I never thought I'd be in the position to have multiple days of downtime due to a server/node being down...
07:16 akay1 :)
07:16 PatNarciso I have a production server currently rebuilding a 64tb md-raid6, and want to hang myself.
07:16 akay1 but did you take the volume offline to do the upgrade?
07:16 PatNarciso the last eta on rebuild is... 52 days.
07:16 PatNarciso I did.
07:16 PatNarciso and per docs, I stopped the volume.
07:17 akay1 haha im rebuilding an 80tb raid6 and its not too bad atm
07:17 akay1 ok... i know in the docs theres an option to do a rolling upgrade without taking it offline, but ive never done this. would be handy to do as i need to shut off tonnes of services to do upgrades
07:18 PatNarciso docs also suggest doing client updates AFTER upgrade.
07:18 akay1 how do you check the version number? ive done it before but i cant find the link
07:18 PatNarciso this... doesn't make sense to me.
07:18 akay1 i just took the volume offline then did both servers and clients
07:19 akay1 ah wait, found it :)
07:19 PatNarciso I found out about the version number mis-match by the client-log bitching up a storm.    also, I tried doing something 3.7 related... I think it was add tier.  and gluster cli complained my version number was not good enough.
07:20 PatNarciso 030400 ?
07:20 akay1 30702
07:20 PatNarciso nice!
07:20 SpComb ZFS dude
07:20 PatNarciso I didn't get that.  I had to set it.
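
The number being compared here is the cluster op-version (30400 for 3.4, 30702 for 3.7.2). A short sketch of checking it and raising it once every server in the pool runs the new binaries:

    # what glusterd is currently operating at
    grep operating-version /var/lib/glusterd/glusterd.info

    # bump it cluster-wide after all servers are upgraded
    gluster volume set all cluster.op-version 30702
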
07:21 akay1 ah ok
07:21 PatNarciso SpComb: ZFS.... tell me more.... I'm rocking XFS.  and... well... life sucks.
07:21 glusterbot News from newglusterbugs: [Bug 1238952] gf_msg_callingfn does not log the callers of the function in which it is called <https://bugzilla.redhat.com/show_bug.cgi?id=1238952>
07:21 akay1 XFS works well for me... looked at ZFS but not sure if mixing up bricks is a bad idea
07:22 PatNarciso akay1: I used xfs on each of my bricks.  and it got FUNKY at 70% usage.  which is much lower than most people report xfs 'funk'.  performance TANKED.
07:23 gem joined #gluster
07:23 bharata-rao joined #gluster
07:23 akay1 ive run mine up and above 99% and havent had a problem
07:23 [Enrico] joined #gluster
07:24 PatNarciso I chose against ZFS because... it felt too 'new' to me.  and some of their documentation felt, un-professional.
07:24 akay1 same as BTRFS which i liked the look of
07:24 PatNarciso but, 9 months later... ZFS looks much more attractive.
07:24 akay1 haha
07:25 PatNarciso BTRFS: holy crap- I had plenty of issues with over the past 2 years.
07:25 akay1 whats your brick/volume size?
07:27 PatNarciso 9(ish) bricks at 4.6TB.  total volume is 44TB.
07:27 PatNarciso ((10th brick is 2.8))
07:28 akay1 ah the 64tb isnt gluster?
07:28 PatNarciso and here is my ultimate failure: the 10 bricks is on a single md-raid6.
07:28 akay1 ooooooooooo ultimate failure alright
07:30 al joined #gluster
07:30 akay1 didnt have any more storage?
07:33 PatNarciso_ joined #gluster
07:33 PatNarciso_ bah, ping timeout.... what was my last msg?
07:33 PatNarciso_ @glusterbot: lastmsg?
07:33 PatNarciso_ ... k
07:34 PatNarciso >>   ... distributed volume, figured it would make offsite backups really easy.  sync each brick to a 5tb usb.  done deal.
07:34 PatNarciso >> was great until it wasn't.
07:34 PatNarciso >> once usage hit about 60%, IO went nuts... and I'm working to resolve this sin.
07:34 PatNarciso eof
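
The per-brick backup scheme described above amounts to something like this sketch (paths are hypothetical; each pass captures only the files DHT happened to place on that brick, and the .glusterfs internals are not useful in a backup):

    # repeat for each brick of the distributed volume, one USB disk per brick
    rsync -aAX --exclude='.glusterfs' /bricks/brick1/ /mnt/usb1/brick1-backup/
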
07:35 akay1 damn that sucks. touch wood ive had minimal problems with xfs
07:35 akay1 but im guessing your problems mainly lie on that raid6
07:37 PatNarciso thats my guess as well.  instead of a read_dir going to 10 *machines*, its going to 10 partitions.  on the same md.  (I'm laughing as I type this)
07:39 akay1 aaaand im laughing as i read this :)
07:40 PatNarciso one day I will look back as I overlook a rack, or server-room full of gluster-nodes and say: wow-- I really did a lot with that single server...
07:40 glusterbot PatNarciso: wow's karma is now -1
07:40 akay1 we've got a couple of medium-large volumes and they run pretty well... not quite able to max out the 10gb network but fast enough for us
07:40 PatNarciso ... damnit glusterbot.
07:41 akay1 haha
07:41 PatNarciso what is a medium-large volume?
07:42 akay1 ~800TB
07:42 PatNarciso <3
07:43 PatNarciso one day... I'll be there.
07:43 akay1 itll be a whole lot easier soon with these big drives out
07:43 ctria joined #gluster
07:46 PatNarciso 6 years ago, I was on a netgear ReadyNAS 2tb...  moved to gluster about 6 months ago.
07:46 PatNarciso I have confidence in the stability of gluster.
07:47 akay1 ive had gluster in some form for about 1.5 years, before that was software-managed balancing between volumes
07:47 akay1 but that was a huge pita
07:47 PatNarciso what is in your 800TB?
07:48 akay1 i had a few problems with gluster mounts just disappearing and the other day with bricks just dropping off?? but able to fix it fairly easily
07:48 akay1 all spinning disk, but looking into tiering
07:48 PatNarciso I had bricks dropping off when I attempted tiering with 3.7.2 and Samba VFS connections.
07:49 akay1 i just changed to samba vfs from samba over the fuse mount so it could be related
07:49 PatNarciso samba vfs flat out killed gluster brick daemons upon file transfer connections.
07:49 akay1 strange
07:50 PatNarciso I switched back to samba using the fuse mount -- and was stable again.
07:50 PatNarciso BUT
07:50 akay1 the performance difference between the two is HUGE... i cant go back
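
For reference, the two Samba setups being compared differ only in the share definition. A minimal smb.conf sketch, assuming a volume named myvol and a fuse mount at /mnt/myvol (share names, volume name and paths are hypothetical):

    # share served through Samba's gluster VFS module (no fuse mount in the data path)
    [video-vfs]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = myvol
        glusterfs:volfile_server = localhost
        kernel share modes = no

    # the same data exported over an ordinary fuse mount
    [video-fuse]
        path = /mnt/myvol
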
07:50 PatNarciso if I had a video editor saving a file, say 40GB prores.mov, while another editor was performing a read_dir...
07:51 akay1 if theyre using the same mount point yes, but a different one, no
07:51 PatNarciso on the same dir;  the video xfer from editor1 would stop, and fileszie became 0.
07:52 PatNarciso yes, they're using the same mount points.
07:52 akay1 can you use a different one?
07:53 PatNarciso it would be a change in process... one that needs to occur at somepoint.
07:54 PatNarciso our editors cross reference a lot of the same material.   so same mountpoint helps.
07:54 kdhananjay joined #gluster
07:57 PatNarciso hmmm... I'm going to need to figure out how to give each editor their own mount point, huh?
07:59 PatNarciso ... perhaps under it, give a RO samba symlink to the full asset library...  no... even that is an os-x auto-indexing nightmare.
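
Giving each editor an independent client, as suggested above, can be as simple as mounting the same volume more than once; a sketch with hypothetical server, volume and path names (each fuse mount is a separate client process with its own caches, and each can then back its own Samba share):

    mount -t glusterfs server1:/video /mnt/editor1
    mount -t glusterfs server1:/video /mnt/editor2
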
07:59 Slashman joined #gluster
08:06 PatNarciso akay1: I (and my team) appreciate our chat tonight.  please; keep in contact (email or twitter @patnarciso).
08:06 fsimonce joined #gluster
08:15 Trefex joined #gluster
08:15 sysconfig joined #gluster
08:17 kdhananjay joined #gluster
08:20 akay1 PatNarcisoZzZ: theres a few ways to do it but it should help your performance. our case is probably different because we have more nodes and separate storage but it couldnt hurt to give it a go... especially when youre seeing those problems like youre describing
08:22 akay1 likewise PatNarcisoZzZ, talk to you later
08:27 PatNarcisoZzZ akay1: twitter.com/akay1?
08:28 anmolb joined #gluster
08:29 shubhendu joined #gluster
08:30 ndarshan joined #gluster
08:41 atalur joined #gluster
08:44 cuqa__ joined #gluster
08:46 anmolb joined #gluster
08:47 shubhendu joined #gluster
08:48 raghug joined #gluster
08:58 ninkotech__ joined #gluster
08:59 [Enrico] joined #gluster
09:01 gem joined #gluster
09:04 MrAbaddon joined #gluster
09:04 deniszh joined #gluster
09:09 RameshN joined #gluster
09:15 MugginsM joined #gluster
09:20 autoditac joined #gluster
09:24 takarider joined #gluster
09:30 Saravana_ joined #gluster
09:33 takarider joined #gluster
09:43 anmolb joined #gluster
09:49 nsoffer joined #gluster
09:54 aravindavk joined #gluster
10:09 nbalacha joined #gluster
10:09 gem joined #gluster
10:12 milkyline joined #gluster
10:21 gem_ joined #gluster
10:23 shubhendu joined #gluster
10:23 gem_ joined #gluster
10:24 shubhendu joined #gluster
10:27 jiffin1 joined #gluster
10:27 rjoseph joined #gluster
10:32 Manikandan joined #gluster
10:32 haomaiwa_ joined #gluster
10:33 pppp joined #gluster
10:35 rajeshj joined #gluster
10:38 meghanam joined #gluster
10:41 kotreshhr joined #gluster
10:42 aravindavk joined #gluster
10:45 atalur joined #gluster
10:52 glusterbot News from newglusterbugs: [Bug 1239037] disperse: Wrong values for "cluster.heal-timeout" could be assigned using CLI <https://bugzilla.redhat.com/show_bug.cgi?id=1239037>
10:52 glusterbot News from resolvedglusterbugs: [Bug 1238811] Fix build on Mac OS X, config tools/glusterfind/Makefile(.am) <https://bugzilla.redhat.com/show_bug.cgi?id=1238811>
10:53 atinmu joined #gluster
10:58 kotreshhr joined #gluster
10:59 jiffin joined #gluster
11:01 rajeshj joined #gluster
11:05 gem joined #gluster
11:09 atalur joined #gluster
11:15 gem joined #gluster
11:19 ChrisNBlum joined #gluster
11:19 LebedevRI joined #gluster
11:22 glusterbot News from newglusterbugs: [Bug 1239042] disperse: Wrong values for "cluster.heal-timeout" could be assigned using CLI <https://bugzilla.redhat.com/show_bug.cgi?id=1239042>
11:26 aravindavk joined #gluster
11:44 Manikandan joined #gluster
11:55 ppai joined #gluster
11:55 pppp joined #gluster
11:57 spalai1 joined #gluster
11:58 vmallika joined #gluster
12:07 rajeshj joined #gluster
12:10 jiffin1 joined #gluster
12:10 jtux joined #gluster
12:15 Susant left #gluster
12:16 atinmu joined #gluster
12:17 pkoro joined #gluster
12:22 kotreshhr joined #gluster
12:22 shubhendu joined #gluster
12:24 itisravi joined #gluster
12:26 kotreshhr left #gluster
12:33 theron joined #gluster
12:55 Ulrar joined #gluster
13:09 raghug joined #gluster
13:20 kovshenin joined #gluster
13:28 curratore joined #gluster
13:30 psilvao Hi again, in gluster i get this following message: [2015-07-02 21:19:41.033912] C [dht-diskusage.c:242:dht_is_subvol_filled] 0-data1-dht: inodes on subvolume 'data1-client-0' are at (96.00 %), consider adding more nodes, Question: how i can add more nodes?
13:34 mator gluster peer probe
13:35 mator gluster vol add-brick
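
Spelled out, mator's answer comes down to three steps. The new hostname and brick path below are hypothetical; the volume name data1 is taken from the log message above:

    # 1. bring the new server into the trusted pool
    gluster peer probe newserver1

    # 2. add its brick to the volume whose inodes are at 96%
    gluster volume add-brick data1 newserver1:/bricks/data1/brick

    # 3. spread existing files (and their inode usage) onto the new brick
    gluster volume rebalance data1 start
    gluster volume rebalance data1 status
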
13:35 dgandhi joined #gluster
13:51 psilvao mator: thanks,
14:02 gnudna joined #gluster
14:02 gnudna left #gluster
14:17 sacrelege joined #gluster
14:17 sacrelege joined #gluster
14:23 glusterbot News from newglusterbugs: [Bug 1239109] geo-rep session goes faulty when symlinks copied in archive mode <https://bugzilla.redhat.com/show_bug.cgi?id=1239109>
14:28 ekuric joined #gluster
14:31 jcastill1 joined #gluster
14:36 jcastillo joined #gluster
14:39 _shaps_ joined #gluster
14:42 ron-slc joined #gluster
14:49 kdhananjay joined #gluster
14:56 soumya joined #gluster
14:59 shubhendu joined #gluster
15:00 jcastill1 joined #gluster
15:05 jcastillo joined #gluster
15:28 dusmant joined #gluster
15:29 kovshenin joined #gluster
15:51 plarsen joined #gluster
16:02 maZtah joined #gluster
16:06 theron joined #gluster
16:08 gem joined #gluster
16:17 mribeirodantas joined #gluster
16:19 Lee- joined #gluster
16:24 curratore left #gluster
16:27 nage joined #gluster
16:34 harish joined #gluster
16:37 rotbeard joined #gluster
16:39 raghug joined #gluster
16:41 blonkel joined #gluster
16:41 blonkel hey one of my bricks crashed and i want to restore it
16:42 blonkel im running on gluster 3.7 but i only found this guide for 3.4: http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server (which doesnt work)
16:42 blonkel any idears working guides out there?
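
No complete answer follows in this log. For 3.7 the usual moving parts are sketched below, not as the official procedure; hostnames, paths and the volume name are hypothetical, and the steps differ if the replacement server reuses the old hostname and peer UUID:

    # from a healthy peer, confirm pool state and add the replacement server
    gluster peer status
    gluster peer probe newhost

    # swap the dead brick for a brick on the new server
    gluster volume replace-brick myvol \
        deadhost:/bricks/b1 newhost:/bricks/b1 commit force

    # rebuild the replica onto the new brick and watch progress
    gluster volume heal myvol full
    gluster volume heal myvol info
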
16:42 corretico joined #gluster
16:43 kovshenin joined #gluster
16:46 soumya joined #gluster
16:48 vovcia joined #gluster
16:50 kovshenin joined #gluster
17:11 raghug joined #gluster
17:15 pppp joined #gluster
17:17 fandi joined #gluster
17:20 kovshenin joined #gluster
17:24 glusterbot News from newglusterbugs: [Bug 1232678] Disperse volume : data corruption with appending writes in 8+4 config <https://bugzilla.redhat.com/show_bug.cgi?id=1232678>
18:02 PatNarcisoZzZ joined #gluster
18:06 haomaiwang joined #gluster
18:17 haomaiwa_ joined #gluster
18:21 Peppard joined #gluster
18:23 fandi joined #gluster
18:32 and` joined #gluster
18:34 mckaymatt joined #gluster
18:56 mckaymatt joined #gluster
19:02 calum_ joined #gluster
19:24 glusterbot News from newglusterbugs: [Bug 1239156] Glusterd crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1239156>
19:25 blonkel im running on gluster 3.7 but i only found this guide for 3.4: http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server (which doesnt work)
19:25 blonkel any working guides out there?..
19:32 Pupeno joined #gluster
19:44 blonkel glusterfs is spawning on the replaced server, but not the nfs / shd daemon =(
19:44 blonkel even if i restart it, and it looks like theres no log entry about this situation..
19:47 abyss^ joined #gluster
20:10 DV joined #gluster
20:12 plarsen joined #gluster
20:19 Arrfab joined #gluster
20:28 fandi joined #gluster
20:37 _nixpanic joined #gluster
20:37 _nixpanic joined #gluster
20:38 TheCthulhu4 joined #gluster
21:13 PatNarciso blonkel, for what its worth: my NFS fails with Gluster 3.7.2.
21:13 PatNarciso blonkel, during a 'gluster v status', the enabled NFS would always be down.
21:14 PatNarciso and -- I'm going from memory now... I remember I had to disable it, as each volume config change I made -- would error, suggesting the commit didn't occur.
21:16 PatNarciso I found it was erroring, because gluster was expecting NFS to (do something?).  and with it not working, the config changes were questionable.
21:16 PatNarciso SO - I disable nfs, and I was able to move forward.
21:18 PatNarciso blonkel: did you try starting gluster in the foreground to watch for debug messages?
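
The two workarounds mentioned above translate to roughly the following (volume name hypothetical):

    # turn off the built-in gluster NFS server for a volume
    gluster volume set myvol nfs.disable on

    # run glusterd in the foreground with debug logging to watch what fails at startup
    glusterd --debug
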
21:31 nsoffer joined #gluster
21:33 Pupeno joined #gluster
21:34 hagarth joined #gluster
21:43 cornus_ammonis joined #gluster
22:20 kovsheni_ joined #gluster
22:41 Ulrar joined #gluster
23:05 doekia joined #gluster
23:13 woakes0700 joined #gluster
23:32 vovcia joined #gluster
23:33 tdasilva joined #gluster
23:34 Pupeno joined #gluster
