
IRC log for #gluster, 2015-03-24


All times shown according to UTC.

Time Nick Message
00:00 saltlake joined #gluster
00:00 quique JoeJulian: the volume is really big
00:01 quique JoeJulian: what if i snapshotted it
00:01 quique from gluster1
00:01 quique and mounted it on gluster2 and gluster3
00:01 quique and removed them one by one
00:01 quique and added them
00:02 quique would it work?
00:02 quique like mounted a copy
00:02 quique on gluster2 and gluster3
00:02 quique as their bricks?
00:07 JoeJulian no
00:07 quique that wouldn't work?
00:07 quique because you're assuming there are changes
00:08 quique in the other nodes that aren't on gluster1
00:08 JoeJulian That wouldn't work because of the ,,(extended attributes)
00:08 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
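
    As a rough illustration of what glusterbot's command shows (the brick path and filename are hypothetical), the xattrs that make brick contents non-interchangeable look like this:
        # run on a gluster server, against a file on the brick itself (not the client mount)
        getfattr -m . -d -e hex /export/brick1/gv0/some/file
        # typical keys include trusted.gfid (the file's cluster-wide identity) and
        # trusted.afr.<volume>-client-N pending-operation counters, which is why a snapshot of
        # one brick can't simply be re-used as the contents of a different brick
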
00:13 daMaestro joined #gluster
00:25 pelox joined #gluster
00:29 T3 joined #gluster
00:52 T3 joined #gluster
01:02 topshare joined #gluster
01:06 gildub joined #gluster
01:14 elitecoder joined #gluster
01:17 devilspgd joined #gluster
01:21 jmarley joined #gluster
01:23 pdrakeweb joined #gluster
01:24 topshare_ joined #gluster
01:27 bala joined #gluster
01:31 punit_ hi i want to ask one question
01:32 punit_ i want to deploy the glusterfs for the production purpose
01:33 punit_ i have 4 servers...every server can host 24 SSD disk
01:33 ira joined #gluster
01:33 punit_ i want to deploy distributed replicated storage with replica=2
01:33 punit_ without Hardware RAID
01:40 RicardoSSP joined #gluster
01:46 JoeJulian punit_: There's no question to answer.
01:49 punit_ JoeJulian: I want to use GlusterFS with oVirt 3.5...please help me make the architecture stable for production use :- I have 4 servers...every server can host 24 SSD disks (as bricks)...i want to deploy distributed replicated storage with replica=2...i don't want to use Hardware RAID...as i think it will badly impact the performance... 1. GlusterFS 3.5 or 3.6?? (which one will be stable for production use) 2. Do i use Hardware
01:49 punit_ RAID or not?? 3. If HW RAID, then which RAID level, and does it impact the performance? 4. I want to make it rock solid...so it can be used for production... 5. How much RAM should be sufficient on each server...on each server i have two E5 CPUs... 6. For network connectivity i have 2*10G NICs with bonding on each server...
01:50 JoeJulian I can't imagine you're looking for a performance gain from raid10, so probably brick-per-ssd
01:50 JoeJulian And 3.5 is the current production suggestion
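
    A minimal sketch of the brick-per-SSD layout being suggested, assuming hypothetical hostnames and mount points; with replica 2, bricks are paired in the order they are listed:
        gluster volume create vmstore replica 2 \
            server1:/bricks/ssd01/brick server2:/bricks/ssd01/brick \
            server3:/bricks/ssd01/brick server4:/bricks/ssd01/brick
        # repeat the pattern for the remaining SSDs; each consecutive pair forms one replica
        # set, and DHT distributes files across the sets
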
01:51 punit_ JoeJulian: but if i use the HW RAID does it impact the performance
01:52 JoeJulian Depends on your hardware. Configuring to keep your iops high is usually best, depending on your use case, of course.
01:52 JoeJulian but don't get stuck comparing apples to orchards.
01:53 punit_ JoeJulian: as i have all SSDs in my architecture...and glusterfs distributed replicated also provides RAID10-type functionality...then why should we use another layer (HW RAID) for redundancy ??
01:54 punit_ JoeJulian: Yes right...i want the best performance of my underlying SSDs....i have no issue using HW RAID or not...would you mind suggesting the best way..
01:54 ira joined #gluster
01:55 JoeJulian I would only use hw raid if my tests showed that I would get better performance and my math showed that I could still reach SLA uptimes.
01:56 JoeJulian but I can't recommend what's best for your use case. You have to know what it is and design a system to meet your criteria.
01:56 punit_ JoeJulian: which means it's better to use the HW RAID to maintain SLA uptimes and performance..
01:58 T3 joined #gluster
01:58 JoeJulian ok
01:59 punit_ JoeJulian: If HW RAID, then which RAID level will give me the best performance and best failure tolerance with glusterfs...should i use RAID 5 or RAID 60 ??
02:00 JoeJulian @lucky iozone
02:00 glusterbot JoeJulian: http://www.iozone.org/
02:00 JoeJulian @uptime calculation
02:00 glusterbot JoeJulian: (uptime [<network>]) -- Returns the time duration since the connection was established.
02:00 JoeJulian @factoid uptime calculation
02:00 JoeJulian @factoids search calculation
02:00 glusterbot JoeJulian: 'reliability calculations' and 'sla calculations'
02:00 JoeJulian @sla calculations
02:00 glusterbot JoeJulian: Calculate your system reliability and availability using the calculations found at http://www.eventhelix.com/realtimemantra/faulthandling/system_reliability_availability.htm . Establish replica counts to provide the parallel systems to meet your SLA requirements.
02:01 JoeJulian punit_: that iozone link is for testing your hardware performance
02:01 JoeJulian and the last link for calculating your SLA.
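
    For example, a basic iozone run might look like the following (file size, record size and paths are placeholders); running it once against a raw brick filesystem and once against a test gluster mount gives a rough before/after comparison:
        iozone -s 4g -r 64k -i 0 -i 1 -i 2 -f /bricks/ssd01/iozone.tmp   # sequential write, read, random
        iozone -s 4g -r 64k -i 0 -i 1 -i 2 -f /mnt/testvol/iozone.tmp
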
02:02 punit_ JoeJulian: thanks...
02:02 punit_ JoeJulian: last question how much RAM should be sufficient on each server to run the glusterfs
02:09 JoeJulian Depends on how many bricks per server, how much cache, etc. I've had 16 bricks on 8 gig though, so it's doable.
02:13 punit_ JoeJulian: if i go with HW RAID i will create two bricks (2 RAID6 virtual disks, each VD with 12 SSD disks)...would 32 GB of RAM be sufficient ??
02:14 harish joined #gluster
02:15 Gues_____ joined #gluster
02:25 dusmant joined #gluster
02:26 DV joined #gluster
02:27 topshare joined #gluster
02:28 haomaiwa_ joined #gluster
02:40 nangthang joined #gluster
02:41 topshare joined #gluster
03:01 ppai joined #gluster
03:04 Bhaskarakiran joined #gluster
03:05 anil_ joined #gluster
03:17 harish joined #gluster
03:23 soumya joined #gluster
03:35 shubhendu joined #gluster
03:39 topshare joined #gluster
03:40 itisravi joined #gluster
03:48 topshare joined #gluster
03:48 bharata-rao joined #gluster
03:49 rjoseph joined #gluster
03:58 RameshN joined #gluster
04:01 atinmu joined #gluster
04:03 nbalacha joined #gluster
04:11 lyang0 joined #gluster
04:16 hagarth joined #gluster
04:19 schandra joined #gluster
04:28 kdhananjay joined #gluster
04:47 anoopcs joined #gluster
04:50 rafi joined #gluster
04:51 nishanth joined #gluster
04:53 jiffin joined #gluster
04:54 lalatenduM joined #gluster
04:54 meghanam joined #gluster
04:55 ndarshan joined #gluster
05:00 ppai_ joined #gluster
05:01 Bhaskarakiran joined #gluster
05:02 hflai_ joined #gluster
05:03 badone_ joined #gluster
05:06 Bhaskarakiran joined #gluster
05:09 badone__ joined #gluster
05:10 ppp joined #gluster
05:10 spandit joined #gluster
05:11 vimal joined #gluster
05:22 glusterbot News from newglusterbugs: [Bug 1200271] Upcall: xlator options for Upcall xlator <https://bugzilla.redhat.com/show_bug.cgi?id=1200271>
05:22 kumar joined #gluster
05:28 nangthang joined #gluster
05:30 dusmant joined #gluster
05:37 gem joined #gluster
05:38 kanagaraj joined #gluster
05:40 eryc joined #gluster
05:40 eryc joined #gluster
05:40 kshlm joined #gluster
05:40 Manikandan joined #gluster
05:40 ashiq joined #gluster
05:45 spandit joined #gluster
05:47 ramteid joined #gluster
05:48 atalur joined #gluster
05:52 badone joined #gluster
05:53 overclk joined #gluster
05:58 nshaikh joined #gluster
05:58 jiffin joined #gluster
06:01 nbalacha joined #gluster
06:03 hchiramm joined #gluster
06:06 jtux joined #gluster
06:07 soumya joined #gluster
06:12 jtux joined #gluster
06:16 RameshN joined #gluster
06:17 harish_ joined #gluster
06:17 dusmant joined #gluster
06:22 glusterbot News from newglusterbugs: [Bug 1205037] [SNAPSHOT]: "man gluster" needs modification for few snapshot commands <https://bugzilla.redhat.com/show_bug.cgi?id=1205037>
06:28 pelox joined #gluster
06:36 deepakcs joined #gluster
06:38 itisravi_ joined #gluster
06:38 RameshN joined #gluster
07:01 suliba joined #gluster
07:02 mbukatov joined #gluster
07:02 itisravi_ joined #gluster
07:04 aravindavk joined #gluster
07:05 kdhananjay joined #gluster
07:14 kumar joined #gluster
07:18 itisravi joined #gluster
07:18 karnan joined #gluster
07:24 smohan joined #gluster
07:24 nangthang joined #gluster
07:26 hchiramm joined #gluster
07:29 dusmant joined #gluster
07:31 maveric_amitc_ joined #gluster
07:36 ghenry joined #gluster
07:36 ghenry joined #gluster
07:38 navid__ joined #gluster
07:38 Philambdo joined #gluster
07:48 bala joined #gluster
07:49 ppai_ joined #gluster
07:51 o5k_ joined #gluster
07:52 glusterbot News from newglusterbugs: [Bug 1205057] tools/glusterfind: Provide API for testing status of session creation <https://bugzilla.redhat.com/show_bug.cgi?id=1205057>
08:00 nshaikh joined #gluster
08:00 gildub joined #gluster
08:05 sripathi joined #gluster
08:20 suliba joined #gluster
08:23 karnan_ joined #gluster
08:35 deniszh joined #gluster
08:41 baoboa joined #gluster
08:45 Pupeno joined #gluster
08:47 dusmant joined #gluster
08:47 o5k joined #gluster
08:48 ndarshan joined #gluster
08:50 fsimonce joined #gluster
08:51 harish_ joined #gluster
08:53 shubhendu joined #gluster
08:56 bala joined #gluster
08:59 magbal joined #gluster
09:00 nishanth joined #gluster
09:03 liquidat joined #gluster
09:06 ppai_ joined #gluster
09:15 overclk joined #gluster
09:15 doekia joined #gluster
09:16 sripathi joined #gluster
09:18 Bhaskarakiran joined #gluster
09:21 ctria joined #gluster
09:23 Norky joined #gluster
09:24 ninkotech joined #gluster
09:24 ninkotech_ joined #gluster
09:26 atalur joined #gluster
09:27 Bhaskarakiran joined #gluster
09:36 ktosiek joined #gluster
09:48 T0aD joined #gluster
09:49 shubhendu joined #gluster
09:51 ndarshan joined #gluster
09:51 bala joined #gluster
09:52 dusmant joined #gluster
09:54 Dw_Sn joined #gluster
09:55 T3 joined #gluster
09:59 nshaikh joined #gluster
10:02 RameshN joined #gluster
10:09 ira joined #gluster
10:09 kovshenin joined #gluster
10:10 nshaikh joined #gluster
10:13 anil_ joined #gluster
10:23 glusterbot News from newglusterbugs: [Bug 1205128] Disperse volume: "df -h" on a cifs mount throws IO error and no file systems processed message <https://bugzilla.redhat.com/show_bug.cgi?id=1205128>
10:38 nishanth joined #gluster
10:40 kkeithley1 joined #gluster
10:43 ppai_ joined #gluster
10:46 Bhaskarakiran joined #gluster
10:52 bene2 joined #gluster
10:53 kkeithley_ Gluster Bug Triage Meeting in 5 minutes in #gluster-meeting
11:00 firemanxbr joined #gluster
11:02 kkeithley_ My mistake. Gluster Bug Triage meeting in 1 hour in #gluster-meeting
11:19 o5k_ joined #gluster
11:38 suliba joined #gluster
11:38 [Enrico] joined #gluster
11:44 T3 joined #gluster
11:45 ws2k3 joined #gluster
11:46 edwardm61 joined #gluster
11:52 kkeithley_ Gluster Bug Triage Meeting in 10 minutes in #gluster-meeting
12:02 RameshN joined #gluster
12:03 kkeithley_ Gluster Bug Triage Meeting now in #gluster-meeting
12:09 pdrakeweb joined #gluster
12:14 kovsheni_ joined #gluster
12:16 nangthang joined #gluster
12:23 firemanxbr joined #gluster
12:26 anoopcs joined #gluster
12:30 firemanxbr joined #gluster
12:41 suliba joined #gluster
12:43 stickyboy I'm looking for advice on 10GbE fiber hardware... we're currently on 10GBASE-T and looking to move to something optical.
12:46 suliba joined #gluster
12:55 plarsen joined #gluster
12:59 T3 joined #gluster
13:01 smohan joined #gluster
13:01 julim joined #gluster
13:01 LebedevRI joined #gluster
13:07 suliba joined #gluster
13:08 Gill joined #gluster
13:13 diegows joined #gluster
13:17 dgandhi joined #gluster
13:17 dusmant joined #gluster
13:18 dgandhi joined #gluster
13:20 calisto joined #gluster
13:20 calisto joined #gluster
13:20 shubhendu joined #gluster
13:20 dgandhi joined #gluster
13:23 shaunm joined #gluster
13:23 dgandhi joined #gluster
13:23 suliba joined #gluster
13:23 dgandhi joined #gluster
13:24 dgandhi joined #gluster
13:25 dgandhi joined #gluster
13:26 dgandhi joined #gluster
13:26 coredump joined #gluster
13:26 dgandhi joined #gluster
13:28 dgandhi joined #gluster
13:28 theron joined #gluster
13:29 dgandhi joined #gluster
13:30 dgandhi joined #gluster
13:31 dgandhi joined #gluster
13:33 dgandhi joined #gluster
13:36 georgeh-LT2 joined #gluster
13:36 halfinhalfout joined #gluster
13:37 jiffin joined #gluster
13:41 halfinhalfout can someone advise me on manual recovery for dirs/files that failed self-heal?
13:42 suliba joined #gluster
13:42 jiffin1 joined #gluster
13:45 plarsen joined #gluster
13:47 stickyboy halfinhalfout: Have a look at splitmount which mounts both bricks in a replica volume so you can compare files: https://github.com/joejulian/glusterfs-splitbrain
13:47 stickyboy And also these docs from Gluster: https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md
13:50 halfinhalfout stickyboy: thanks. I will. "gluster volume heal <volname> info split-brain" shows 0 entries.
13:50 halfinhalfout but you think treating it like a split-brain situation is the way to go?
13:57 Gill left #gluster
13:58 halfinhalfout should glusterfs-splitbrain work on whole directories & their contents? E.g., some files missing from a directory on 1 brick in a replicate pair: use glusterfs-splitbrain to remove the entire directory from the brick w/ missing files, then initiate a volume heal => success?
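
    For reference, the usual manual approach for a single file that refuses to heal is sketched below (brick path, filename and volume name are hypothetical); whole directories are trickier because their .glusterfs entries are symlinks rather than hard links:
        # on the brick holding the bad/stale copy:
        getfattr -n trusted.gfid -e hex /export/brick1/gv0/dir/file   # note the gfid
        rm /export/brick1/gv0/dir/file
        rm /export/brick1/gv0/.glusterfs/ab/cd/abcd...                # the gfid hard link; first two byte pairs name the subdirs
        gluster volume heal gv0                                       # then re-check with 'gluster volume heal gv0 info'
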
13:59 Slashman joined #gluster
13:59 bala joined #gluster
14:00 adamaN joined #gluster
14:05 saltlake joined #gluster
14:05 nishanth joined #gluster
14:06 julim joined #gluster
14:09 bene2 joined #gluster
14:10 hamiller joined #gluster
14:16 roost joined #gluster
14:17 dusmant joined #gluster
14:19 gnudna joined #gluster
14:22 gnudna guys when setting the gluster mount point in a replicated env can i use either hostname in parallel?
14:22 gnudna server1 and server2: have half the clients connect to server1 and the other half to server2
14:22 hamiller I think the answer is yes, can you clarify the question?
14:23 hamiller native mount, nfs, cifs all will work that way
14:23 hamiller but don't mix cifs and nfs
14:23 gnudna i was refering to native using glusterfs
14:23 hamiller on the same files at the same time  :)
14:24 gnudna on the same share
14:24 hamiller that should be fine
14:24 gnudna volume
14:24 gnudna the files themselves are kvm images
14:24 gnudna no chance really of 2 hosts using the same file
14:24 gnudna unless i botch it
14:24 kkeithley_ native mounts can mount from any participating gluster server.  The handshake will tell the clients about all the servers. I/O will usually go to the faster/fastest server
14:25 hamiller right, clients are smart
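
    A hedged example of the native mount with a fallback volfile server (hostnames and volume name are assumptions); the backup server only matters for fetching the volfile at mount time, since after that the client talks to every brick directly:
        mount -t glusterfs -o backupvolfile-server=server2 server1:/gv0 /mnt/gv0
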
14:25 gnudna well in my testing i have found that when 1 server goes down i can no longer access the vm's through ssh
14:26 gnudna which in my setup defeats the purpose that i was aiming for
14:26 glusterbot News from resolvedglusterbugs: [Bug 976750] Disabling NFS causes E level errors in nfs.log. <https://bugzilla.redhat.com/show_bug.cgi?id=976750>
14:26 gnudna thanks for the information guys
14:27 kkeithley_ If one server goes down there's a 42 second timeout. After the timeout though everything should resume as if nothing happened. If that's not the case for you then you should open a ,,(bug)
14:27 glusterbot I do not know about 'bug', but I do know about these similar topics: 'fileabug'
14:27 kkeithley_ then you should ,,(fileabug)
14:27 glusterbot Please file a bug at http://goo.gl/UUuCq
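
    That 42-second window is the network.ping-timeout volume option; a sketch of checking and changing it (volume name assumed), keeping in mind that setting it much lower than the default is generally discouraged:
        gluster volume info gv0 | grep ping-timeout   # only listed if it has been changed from the default
        gluster volume set gv0 network.ping-timeout 42
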
14:30 jmarley joined #gluster
14:33 Misuzu joined #gluster
14:36 gnudna kkeithley_ in my case that is not the case, but in my lab env my systems are somewhat underpowered for what i am attempting
14:36 quique the self heal daemon is not running
14:36 gnudna i am running gluster and kvm on the same machine
14:36 quique i've tried restarting glusterd
14:36 quique but that doesn't seem to solve it
14:36 gnudna i have 2 of these machines hence the replicated setup
14:37 RicardoSSP joined #gluster
14:38 bennyturns joined #gluster
14:40 halfinhalfout1 joined #gluster
14:41 ctria joined #gluster
14:43 coredump is there any benchmark tool that will give me a reliable IOPS value?
14:48 T3 joined #gluster
14:50 JustinClift fio?
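
    A minimal fio sketch for a mixed random-I/O test on a gluster mount (paths and sizes are placeholders, and results over FUSE should be read as indicative rather than absolute):
        fio --name=randrw --directory=/mnt/gv0/fiotest --rw=randrw --rwmixread=70 \
            --bs=4k --size=1g --numjobs=4 --iodepth=32 --ioengine=libaio \
            --runtime=60 --time_based --group_reporting
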
14:51 suliba_ joined #gluster
14:51 fattaneh joined #gluster
14:54 getup joined #gluster
14:55 jiffin joined #gluster
14:56 jobewan joined #gluster
15:02 nbalacha joined #gluster
15:04 jiffin1 joined #gluster
15:08 calisto_ joined #gluster
15:11 jiffin joined #gluster
15:11 sankarshan joined #gluster
15:25 RameshN joined #gluster
15:26 jiffin1 joined #gluster
15:49 shubhendu joined #gluster
15:54 DV joined #gluster
15:54 hamiller joined #gluster
15:56 bala joined #gluster
16:10 kumar joined #gluster
16:11 firemanxbr joined #gluster
16:17 jiffin joined #gluster
16:28 fattaneh1 joined #gluster
16:28 fattaneh1 left #gluster
16:28 RameshN joined #gluster
16:29 jiffin joined #gluster
16:30 bene3 joined #gluster
16:33 kshlm joined #gluster
16:35 o5k__ joined #gluster
16:43 soumya joined #gluster
16:43 RameshN joined #gluster
16:49 rwheeler joined #gluster
16:53 jiffin joined #gluster
16:54 John___ joined #gluster
16:55 nishanth joined #gluster
16:56 John___ Hello, I ran into a problem installing gluster 3.5 onto ubuntu 14.04.
16:56 John___ sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.5
16:56 John___ Cannot add PPA: 'ppa:semiosis/ubuntu-glusterfs-3.5'.
16:56 Rapture joined #gluster
16:57 John___ This code was part of an ansible script. So I suspect the PPA string is the same as last week when it worked.
16:57 John___ Anyone aware of a change here?
16:57 JoeJulian @ppa
16:57 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
16:58 JoeJulian They were moved to allow for more collaboration.
17:00 John___ Thank you. I understand.
17:00 semiosis John___: you should set up your own APT repo, or at least your own PPA, if you're going to automate deployment of packages
17:01 semiosis automated deployment shouldn't have external dependencies, generally
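
    The replacement PPAs live under launchpad.net/~gluster; the exact PPA name below is an assumption, so verify it against the links glusterbot gave above:
        sudo add-apt-repository ppa:gluster/glusterfs-3.5   # assumed name; confirm via the glusterbot links
        sudo apt-get update
        sudo apt-get install glusterfs-server
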
17:04 ildefonso joined #gluster
17:11 lalatenduM joined #gluster
17:14 jiffin joined #gluster
17:16 calisto joined #gluster
17:21 halfinhalfout1 I've got a replicate volume that becomes unavailable to clients when performing a self-heal.
17:22 halfinhalfout1 using gluster 3.5.3 from official Ubuntu PPA
17:23 halfinhalfout1 the volume is not large, only 446GB of 600GB used, but it has millions of directories & small files
17:23 bene2 joined #gluster
17:25 halfinhalfout1 think it has to do with the way the self-heal process traverses directories in 3.5, and this is supposed to be fixed in 3.6.
17:25 halfinhalfout1 I don't think I can move to 3.6 right now, though.
17:25 halfinhalfout1 does anyone know a way to throttle the self-heal process to prevent it from making the volume unavailable?
17:26 JoeJulian cgroups?
17:28 halfinhalfout1 I can think of a few possibilities: 1) 'ionice 7' the glustershd process could help 2) set 'background-self-heal-count' on the volume to … something not the default. :-)
17:28 halfinhalfout1 background-self-heal-count is mentioned here: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
17:29 JoeJulian It's also mentioned in "gluster volume set help"
17:29 JoeJulian That page should be retired.
17:29 John___ semiosis: Thanks I will investigate.
17:30 halfinhalfout1 cool. I'm not sure exactly what it would do, though. "This specifies the number of self-heals that can be  performed in background without blocking the fop"
17:31 hamiller joined #gluster
17:32 JoeJulian Ah, now that's a different question.
17:33 JoeJulian There's a self-heal daemon that processes the heals, but each client can also heal files if a file it's accessing needs to be healed. Up to background-self-heal-count heals can be processed by the client in the background. Once that queue is full, the next lookup will block until the file is healed.
17:36 halfinhalfout1 ok. would that background-self-heal-count be per-client limit or a total for all clients accessing the volume?
17:41 halfinhalfout1 trying to figure out if the background-self-heal-count would actually be a useful knob to twiddle to reduce disk I/O of self-heal process.
17:41 edong23 joined #gluster
17:42 JoeJulian per client
17:43 JoeJulian It might, but at the cost of blocking your application for however long it takes to heal a file.
17:43 JoeJulian Assuming small files, that may be acceptable.
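
    Putting both knobs mentioned above into commands (the volume name is hypothetical, and the ionice call only helps on I/O schedulers that honour priorities, e.g. CFQ):
        gluster volume set myvol cluster.background-self-heal-count 8
        ionice -c2 -n7 -p $(pgrep -f glustershd)   # lowest best-effort priority; -c3 would be idle-only
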
17:50 hchiramm_ joined #gluster
17:54 gnudna what are considered small files?
17:54 gnudna is a 4G img file considered small ;)
17:56 plarsen joined #gluster
18:00 JoeJulian no
18:01 JoeJulian imho, a small file is one in which the tcp overhead is significant.
18:02 gnudna fair enough
18:02 gnudna and makes sense
18:02 gnudna though i assume you meant is not significant
18:03 kkeithley_ _is_ significant.
18:03 kkeithley_ Writing 1000s of 40-byte files means the payload is smaller than the TCP/IP overhead
18:05 gnudna i was looking it as how long it takes to transfer something over
18:05 stickyboy Ooh, are you guys talking about interconnects? I'm debating moving from copper-based 10GbE (10GBASE-T)...
18:10 fattaneh1 joined #gluster
18:13 CyrilPeponnet joined #gluster
18:24 kanagaraj joined #gluster
18:26 gnudna i wonder if using a crossover cable can give an advantage on a replicated setup
18:48 lpabon joined #gluster
18:57 dbruhn joined #gluster
19:01 o5k joined #gluster
19:02 stickyboy Hmmm, debating moving to Infiniband... 10GbE latency is causing problems.
19:03 stickyboy `ls` is slow... rebalance and data migrations are slow as molasses on 10GbE... well, if you have small files / lots of directories.
19:04 fattaneh1 left #gluster
19:07 calisto_ joined #gluster
19:37 dbruhn stickyboy, unless you are running optical IB it won't make a difference, and even then it won't be as big of an improvement as you think it will be over 10gb
19:37 dbruhn I have both 20gb, and 40gb IB and with huge directories stuff like that is still slow.
19:38 dbruhn what version are you running?
19:38 ppp joined #gluster
19:41 Pupeno_ joined #gluster
20:02 halfinhalfout1 anybody know what python requirements are for splitmount?
20:02 halfinhalfout1 can't get it to run on ubuntu 12.04 or 14.04 w/ python .7.3 or 2.7.6 respectively
20:03 halfinhalfout1 oops. not ".7.3" … "2.7.3"
20:03 stickyboy dbruhn: We're on Gluster 3.5.3 with 10GbE over twisted pair.
20:04 stickyboy dbruhn: To be honest our performance is pretty good. We are learning which of our users' apps write millions of files/folders and encourage them to write to node-local scratch storage.
20:05 stickyboy dbruhn: Right now I'm rebalancing after a brick replacement and I can just see glustershd getting stuck in a user's directory for days... wandering an endless hierarchy of folders, LOL.
20:06 dbruhn I have actually recently retired my systems that were on IB.
20:06 dbruhn I was stuck on 3.3.x on those systems
20:06 stickyboy Eek
20:06 stickyboy I'm gonna move to 3.6.x soon. Waiting for a few point releases. LOL.
20:07 dbruhn One of the reasons I was stuck was because RDMA support was always kind of spotty in upgrades, if you are running IPoIB you're far better off with upgrades.
20:07 stickyboy It's good to hear people are moving away from IB. 10GbE is easy as hell.
20:08 dbruhn I still have all my IB equipment here, the systems I was running it all for were sold with my last company. So I was able to decommission it after the data was transitioned to their systems.
20:09 dbruhn I have a mix of 10GB, 20GB IB, and 40GB IB stuff laying around I use for testing stuff. My couple of current systems I am running gluster for are actually just running bonded/teamed 1GB right now.
20:09 stickyboy To be honest I don't need low latency per se. It's just a usability thing. But if I get faster disks, tune the performance translators, eke more performance out of TCP/IP tunables, stop users from running apps which create millions of files/folders on network storage, etc... it'll probably get more responsive.
20:09 JoeJulian halfinhalfout1: What kind of errors are you seeing? I use 2.7.8 but I'm not against making it work on 2.7.3 if that's broken.
20:10 dbruhn The reason Gluster historically was so slow with directory ops like ls is because it would cause a stat on each file as it went through, and if you are using replication I believe that also triggered a heal op to check the file
20:11 dbruhn Someone else can correct me if I am wrong, but I thought in 3.6 or 3.7 this was changing though.
20:11 rotbeard joined #gluster
20:11 JoeJulian Well, ls does a stat to determine what kind of file it is so it can decorate the filename.
20:11 JoeJulian You can do an ls that doesn't do that.
20:11 stickyboy JoeJulian: LOL decorate.
20:12 JoeJulian ?
20:12 dbruhn JoeJulian, for sure *thumbs up* I am just giving the dumb version.
20:13 JoeJulian That's what it does. By color or character by default in most distros.
20:13 * JoeJulian isn't even trying to be funny.
20:13 stickyboy JoeJulian: Yeah. I know you're not. :P
20:14 stickyboy JoeJulian: I just did a `time \ls src/blah` and it's instantaneous; `time ls src/blah` took 17 seconds but was so pretty!
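
    The difference usually comes down to the shell alias, as in this sketch (the alias shown is typical, not guaranteed):
        type ls                        # often: ls is aliased to `ls --color=auto'
        \ls src/blah                   # backslash bypasses the alias, so no per-file stat for colouring
        ls --color=never src/blah      # same effect without the escape
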
20:17 halfinhalfout1 JoeJulian: https://gist.github.com/halfinhalfout/b0c7ba443c1edbf0bf5a
20:20 JoeJulian halfinhalfout1: Ah. Oops. You need to have the cli installed and, as of this moment, needs to be run on a member of the trusted pool. The latter I can change, but prior to 3.6.2 I have to retrieve the volfile using the cli.
20:21 JoeJulian Personally, I just run it on a server.
20:21 halfinhalfout1 ah, ok
20:21 halfinhalfout1 thx. I'll try that
20:21 JoeJulian I'll update the readme
20:23 halfinhalfout1 thx. that's appreciated
20:24 Prilly joined #gluster
20:25 JoeJulian Oh, you don't have to be on a trusted server, just have the cli installed.
20:26 JoeJulian I actually did think that far ahead. :D
20:28 Philambdo joined #gluster
20:29 stickyboy JoeJulian: Did you move to 3.6 yet? You're my canary (or whatever).
20:29 JoeJulian Only at home.
20:29 JoeJulian I'd wait. 3.6.3 has a lot of critical bug fixes.
20:30 adamaN Hi, i have an error : No space left on device. When i have 44TB left how can i fix this?
20:31 gnudna im using 3.6.2 at home
20:32 gnudna after JoeJulian comment i will be needing 3.6.3
20:32 o5k__ joined #gluster
20:32 Liquid-- joined #gluster
20:32 halfinhalfout1 JoeJulian: splitmount is working for me now that I've got the cli installed
20:32 halfinhalfout1 thx much
20:32 JoeJulian You're welcome.
20:33 JoeJulian adamaN: I would imagine one (or a replica) of your bricks is full.
20:34 JoeJulian I would probably try to rebalance.
20:34 adamaN yes i have a few full one but have more that are 30% or less
20:34 JoeJulian but you're growing files that reside on the full ones.
20:34 adamaN i am rebalancing since yesterday
20:34 JoeJulian ok
20:35 adamaN but i have scripts that are writing new files every 30mns
20:35 adamaN and the error keeps showing
20:35 JoeJulian new files shouldn't be created on full bricks. <sigh>
20:35 o5k_ joined #gluster
20:35 JoeJulian shouldn't is the key word.
20:36 JoeJulian Can you please file a bug report
20:36 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:36 adamaN newbie question:isn't gluster suppose to know it is full?
20:36 JoeJulian yep
20:37 adamaN i am using glusterfs 3.3.1 built on Apr  2 2013 15:09:48
20:37 adamaN and it is old
20:37 JoeJulian It should go to the brick that the filename hashes to, see that it exceeds the threshold, then go to the next dht subvolume and try there.
20:37 adamaN maybe bug was fixed?
20:37 JoeJulian Oh, well then all bets are off.
20:37 JoeJulian Still should, but who knows.
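
    Two things worth checking in this situation, sketched with placeholder paths and volume name: which bricks are actually full, and whether the min-free-disk reserve is set so DHT avoids nearly-full bricks for new files:
        df -h /export/brick*                               # run on every server
        gluster volume set myvol cluster.min-free-disk 10%
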
20:38 JoeJulian Oh, and rebalance will likely fail to complete the first time on 3.3.1.
20:39 adamaN so should i stop the rebalance
20:39 adamaN ?
20:39 adamaN in two nodes it is in progress
20:40 JoeJulian Just keep an eye on it. If the log seems to stall it's probably borked.
20:40 gnudna adamaN are all the bricks the same size?
20:40 adamaN nope different size
20:41 gnudna i thought bricks had to be the same size
20:41 adamaN how would i know it is stalled?
20:41 JoeJulian intuition
20:42 JoeJulian gnudna: they don't *have* to be the same size, you just risk having adamaN's problem if they're not.
20:42 gnudna well depending on the setup it will very likely happen
20:42 gnudna using replication as an example
20:43 gnudna in theory even distributed at some point in time
20:43 JoeJulian you do, absolutely, want replicas to be the same size.
20:43 JoeJulian I have no idea what the dht algorithm would do if one replica was full and the other not.
20:44 gnudna ok but in the case of distrib 1file -> 3 nodes
20:44 gnudna you would assume that a 1/3 ratio on all the bricks
20:44 JoeJulian @dht
20:44 glusterbot JoeJulian: I do not know about 'dht', but I do know about these similar topics: 'dd'
20:44 JoeJulian @meh
20:44 glusterbot JoeJulian: I'm not happy about it either
20:44 JoeJulian @lucky dht misses are expensive
20:44 glusterbot JoeJulian: http://joejulian.name/blog/dht-misses-are-expensive/
20:44 JoeJulian Check out that article.
20:47 adamaN Is there a fix to my problem?
20:48 papamoose joined #gluster
20:48 gnudna expand your disk size if possible ;)
20:49 adamaN i can't. But i've been adding new nodes
20:49 adamaN which are not full (10%)
20:50 adamaN how can i make gluster write on those?
20:50 gnudna JoeJulian might be of more help
20:50 gnudna he seems to know what he is talking about ;)
20:50 JoeJulian I just did... Did you see how dht works?
20:54 adamaN ok saw the page
20:54 adamaN does that script need to be in the volume?
20:55 JoeJulian So each file is stored whole. The hash is used to determine which dht subvolume it should be on.
20:55 JoeJulian So 1 file is not spread across multiple subvolumes. That would be ,,(stripe).
20:55 glusterbot Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
20:55 JoeJulian (spoiler: probably not)
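
    One way to see where DHT placed a given file, using the pathinfo virtual xattr from a client mount (mount point and filename are placeholders):
        getfattr -n trusted.glusterfs.pathinfo /mnt/gv0/some/file
        # the output lists the brick(s) holding the file; on a replicated volume both replicas appear
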
20:56 gnudna JoeJulian i guess i misunderstood
20:56 gnudna if i use distributed vs replica
20:56 JoeJulian usually both
20:56 adamaN i am using distributed
20:57 JoeJulian replica to achieve HA, distributed to expand capacity.
20:57 gnudna ok make sense
20:58 gnudna in my case i just assumed replica = mirror
20:58 JoeJulian more or less, but I try not to let people compare clustered systems to raid. They don't equate.
20:58 adamaN Thanks, i am going to do more reading on it and comeback with more question
20:59 gnudna exactly what i was thinking
20:59 gnudna i thought replica = raid1
20:59 JoeJulian cool, adamaN. I'll be around (more or less).
21:02 gnudna later guys and as usual good times ;)
21:02 gnudna left #gluster
21:08 Prilly joined #gluster
21:12 bennyturns joined #gluster
21:13 badone joined #gluster
21:25 Liquid-- joined #gluster
21:45 suliba joined #gluster
21:46 Pupeno joined #gluster
21:49 harmw joined #gluster
21:49 Arrfab joined #gluster
21:53 kiwnix joined #gluster
21:55 suliba joined #gluster
22:00 tessier joined #gluster
22:00 halfinhalfout1 left #gluster
22:06 T3 joined #gluster
22:12 hchiramm_ joined #gluster
22:16 kovshenin joined #gluster
22:30 kovshenin joined #gluster
22:34 XpineX joined #gluster
22:34 tessier joined #gluster
22:35 kovshenin joined #gluster
22:40 aea joined #gluster
22:43 aea Would GlusterFS be suitable for a very very small deployment, i.e. <1GB of data with frequent reads and sparse writes? I have a setup using Ceph right now and it consumes about 4-6x more memory than I have data stored. I'm going to test anyway but would appreciate a "nope, turn around, use x instead" recommendation if it's relevant.
23:09 T3 joined #gluster
23:29 plarsen joined #gluster
23:43 necrogami joined #gluster
23:57 gildub joined #gluster
