
IRC log for #gluster, 2014-10-28


All times shown according to UTC.

Time Nick Message
00:04 sputnik13 anybody have experience using multi terabyte files off of a gluster volume?
00:05 sputnik13 if not multi terabyte at least hundreds of gigs per file
00:05 sputnik13 we seem to be getting frequent corruptions, we're running on 8 nodes with 2 replica distribute
00:06 JoeJulian sputnik13: yep, been doing that for a while now. No problems here.
00:07 sputnik13 JoeJulian: what kind of files?
00:09 JoeJulian VM images and innodb tables
00:10 sputnik13 JoeJulian: static images or active images?
00:10 JoeJulian active
00:10 sputnik13 JoeJulian: what version of gluster and what OS?  replicate?
00:10 JoeJulian static VMs would kind-of suck. ;)
00:11 sputnik13 JoeJulian: yeah, I was making sure you're not using it for glance images :-)
00:11 JoeJulian Different cases, replica 2 and replica 3, from versions 2.0 -> 3.4.4
00:13 sputnik13 centos? ubuntu?
00:13 JoeJulian both
00:14 JoeJulian Have you changed any of the volume settings?
00:15 sputnik13 JoeJulian: yeah, we're using http://pastebin.com/qau016qM
00:15 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
00:16 sputnik13 I know some of those options don't work, just haven't taken the time to prune them
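For reference, the options actually applied to a volume can be listed and pruned from the CLI; a minimal sketch ("myvol" is a placeholder volume name):

    # options that have been explicitly set appear under "Options Reconfigured:"
    gluster volume info myvol
    # drop a single option back to its default
    gluster volume reset myvol performance.cache-size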
00:17 JoeJulian mmmm... right... you're using stripe.
00:17 sputnik13 uhh, for this volume no, I pasted one of my setup scripts
00:18 sputnik13 for the volume I'm having issues with right now it's just a replica 2 distribute, no stripe
00:18 JoeJulian Well that makes me feel a little better (sort of)
00:18 JoeJulian Nothing in the client or brick logs of Error or higher?
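GlusterFS log lines carry a one-letter severity after the timestamp, so errors and worse can be pulled out with a grep; a sketch assuming the default log locations (brick logs on the servers, mount logs on the clients):

    grep -E '\] [EC] \[' /var/log/glusterfs/bricks/*.log
    grep -E '\] [EC] \[' /var/log/glusterfs/*.log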
00:20 JoeJulian Do you use smart raid controllers?
00:21 sputnik13 JoeJulian: one sec
00:23 sputnik13 we have raid controllers yes, battery backed raid 6
00:23 sputnik13 I think for this particular corruption this error message might be an indication of what's happening
00:23 sputnik13 XFS: possible memory allocation deadlock in kmem_alloc (mode:0x250)
00:24 sputnik13 I'm seeing this on the 2 nodes that have the corrupted file
00:25 JoeJulian ew, yeah, that doesn't sound good.
00:26 _Bryan_ joined #gluster
00:28 sputnik13 we have 24GB of RAM on these and it looks like the majority is being used by glusterfsd
00:28 sputnik13 maybe it was running out of memory
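A quick way to confirm that suspicion with stock procps tools, plus a check for the XFS message in the kernel log (nothing here is gluster-specific):

    # resident memory of every brick daemon, largest first
    ps -C glusterfsd -o pid,rss,vsz,args --sort=-rss
    free -m
    dmesg | grep -i kmem_alloc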
00:44 bala joined #gluster
00:46 meghanam_ joined #gluster
00:47 meghanam joined #gluster
00:49 sputnik13 JoeJulian: how much ram do you have on your gluster nodes?
00:50 JoeJulian 32G on the smaller ones. Multiple bricks per, though, and I tuned down the cache.
00:50 JoeJulian Also, my servers are not also clients, not sure if that's different from you.
00:52 sputnik13 JoeJulian: ours are also not clients, but I'm using 4G cache-size
00:52 sputnik13 for each volume...
00:52 sputnik13 perhaps that's the problem
00:52 sputnik13 :-)
00:53 sputnik13 I'm guessing that there is a glusterfsd for each volume, and each glusterfsd is using its own 4GB cache (given I'm specifying 4G on all my volumes)
00:53 sputnik13 am I barking up the right tree here?
00:54 JoeJulian glusterfsd for each brick, yes. And I don't know where that number gets used. It seems completely arbitrary sometimes. I just kept reducing it until I wasn't maxing out my ram.
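The arithmetic behind the suspicion: one glusterfsd runs per brick, and performance.cache-size applies to each of a volume's bricks, so a server hosting, say, five bricks each allowed a 4GB io-cache can commit ~20GB to cache alone on a 24GB box (the five-brick figure is illustrative). A sketch of tuning it down, per JoeJulian's approach:

    # repeat for each volume; budget the value times the bricks hosted per server
    gluster volume set myvol performance.cache-size 512MB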
00:55 semiosis joined #gluster
01:46 haomaiwang joined #gluster
01:47 XpineX_ joined #gluster
01:49 bala joined #gluster
01:49 harish joined #gluster
01:53 msmith joined #gluster
01:56 haomaiwa_ joined #gluster
02:21 jwillis0720 joined #gluster
02:45 calisto joined #gluster
03:03 ninkotech joined #gluster
03:09 DV joined #gluster
03:11 ninkotech joined #gluster
03:22 R0ok_|kejani joined #gluster
03:25 hagarth joined #gluster
03:27 haomaiw__ joined #gluster
03:35 kshlm joined #gluster
03:41 XpineX__ joined #gluster
03:47 rjoseph joined #gluster
03:47 kumar joined #gluster
03:50 ira joined #gluster
03:51 kshlm joined #gluster
03:57 kanagaraj joined #gluster
04:05 shubhendu joined #gluster
04:05 RameshN joined #gluster
04:06 saurabh joined #gluster
04:16 prasanth_ joined #gluster
04:22 nbalachandran joined #gluster
04:23 atinmu joined #gluster
04:29 hflai joined #gluster
04:37 rafi1 joined #gluster
04:37 Rafi_kc joined #gluster
04:39 sahina joined #gluster
04:39 anoopcs joined #gluster
04:49 ppai joined #gluster
04:51 meghanam_ joined #gluster
04:52 meghanam joined #gluster
04:55 ndarshan joined #gluster
04:58 R0ok_ joined #gluster
05:04 lalatenduM joined #gluster
05:08 anoopcs1 joined #gluster
05:16 soumya joined #gluster
05:19 saurabh joined #gluster
05:19 ira joined #gluster
05:26 Humble joined #gluster
05:27 nishanth joined #gluster
05:29 deepakcs joined #gluster
05:32 jiffin joined #gluster
05:38 msmith joined #gluster
05:43 atinmu joined #gluster
05:45 nishanth joined #gluster
05:47 soumya joined #gluster
05:49 prasanth_ joined #gluster
05:50 atalur joined #gluster
05:53 ramteid joined #gluster
05:57 prasanth_ joined #gluster
06:05 soumya joined #gluster
06:10 ricky-ti1 joined #gluster
06:16 overclk joined #gluster
06:19 mkasa joined #gluster
06:19 nishanth joined #gluster
06:26 anoopcs joined #gluster
06:27 anoopcs joined #gluster
06:29 ira joined #gluster
06:30 ppai joined #gluster
06:32 R0ok_ joined #gluster
06:40 kshlm joined #gluster
06:47 bharata-rao joined #gluster
06:54 ekuric joined #gluster
06:56 atinmu joined #gluster
06:58 ppai joined #gluster
07:01 ctria joined #gluster
07:03 overclk joined #gluster
07:07 karnan joined #gluster
07:17 andreask joined #gluster
07:25 rgustafs joined #gluster
07:28 Slydder joined #gluster
07:32 Fen2 joined #gluster
07:32 overclk msvbhat, ping
07:32 glusterbot overclk: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
07:33 dusmant joined #gluster
07:43 nishanth joined #gluster
07:45 bala joined #gluster
07:45 xrubbit joined #gluster
07:49 Humble joined #gluster
07:54 andreask joined #gluster
07:58 dguettes joined #gluster
08:05 ricky-ticky joined #gluster
08:23 haomaiwa_ joined #gluster
08:27 ricky-ti1 joined #gluster
08:29 xrubbit joined #gluster
08:40 glusterbot New news from newglusterbugs: [Bug 1157985] [USS]: deletion and creation of snapshots with same name causes problems <https://bugzilla.redhat.com/show_bug.cgi?id=1157985>
08:45 kaushal_ joined #gluster
08:45 xrubbit joined #gluster
08:57 xrubbit joined #gluster
09:06 xrubbit joined #gluster
09:07 atinmu joined #gluster
09:08 harish joined #gluster
09:12 hagarth joined #gluster
09:13 Slashman joined #gluster
09:18 xrubbit joined #gluster
09:19 vikumar joined #gluster
09:20 hagarth1 joined #gluster
09:22 deepakcs joined #gluster
09:33 georgeh joined #gluster
09:36 cjanbanan joined #gluster
09:40 glusterbot New news from newglusterbugs: [Bug 1158008] Quota utilization not correctly reported for dispersed volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1158008>
09:44 MrAbaddon joined #gluster
09:50 rgustafs joined #gluster
09:57 hagarth overclk: ping, anything to add from changelog/geo-rep in 3.6 release notes? maybe history api?
10:00 overclk hagarth, yep, that alongside geo-rep improvements.
10:01 hagarth overclk: would it be possible for you to add them to the online draft?
10:02 kshlm joined #gluster
10:02 overclk hagarth, sure. link please? I'll add it to it then.
10:03 hagarth overclk: http://goo.gl/zY6fve
10:13 overclk hagarth, done editing.
10:13 atinmu joined #gluster
10:15 hagarth overclk: cool, thanks. have merged your changes.
10:16 overclk hagarth, I have submitted the changes. I can view them in the doc too.
10:16 hagarth overclk: I got a notification to merge to my doc from a fork .. so merged that.
10:18 overclk hagarth, ok, cool. Additionally, we'd need to mention changelog barriering in the "barrier" section.
10:19 overclk hagarth, I'll do that once the basic barrier doc is included.
10:19 hagarth overclk: ok cool
10:25 spoofedpacket joined #gluster
10:33 LebedevRI joined #gluster
10:35 spoofedpacket We have a gluster peer that's been offline for some time (around 12 days). We're planning on turning it back on; are there any potential issues to be aware of? (3.4.3, four-node cluster, 3 of the nodes are currently online)
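For reference, when a replica peer rejoins after an outage like this, self-heal has to catch up on roughly 12 days of changes, which can generate heavy disk and network load. A sketch of the usual checks with the 3.4-era CLI ("myvol" is a placeholder):

    gluster peer status
    # files still pending heal on the replica
    gluster volume heal myvol info
    gluster volume status myvol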
10:49 nishanth joined #gluster
10:54 xrubbit joined #gluster
10:54 badone joined #gluster
11:00 lpabon joined #gluster
11:01 haomaiwang joined #gluster
11:03 xrubbit joined #gluster
11:04 ppai joined #gluster
11:09 calisto joined #gluster
11:24 bene2 joined #gluster
11:29 xrubbit joined #gluster
11:29 bala joined #gluster
11:30 calum_ joined #gluster
11:30 nishanth joined #gluster
11:30 mojibake joined #gluster
11:36 hchiramm joined #gluster
11:42 glusterbot New news from resolvedglusterbugs: [Bug 1150244] glusterfsd hangs on IO when underlying ext4 filesystem corrupts an xattr <https://bugzilla.redhat.com/show_bug.cgi?id=1150244>
11:46 soumya_ joined #gluster
11:47 overclk joined #gluster
11:55 msvbhat overclk++
11:55 glusterbot msvbhat: overclk's karma is now 1
11:55 ndevos REMINDER: Gluster Bug Triage meeting in #gluster-meeting starts in 5 minutes
11:56 hagarth ndevos: not in 65 minutes? the invite I have is for then.
11:56 ndevos hagarth: uh, no... 12:00 UTC
11:56 * ndevos checks
11:57 ndevos hagarth: maybe that invite has my local timezone in it?
11:57 LebedevRI joined #gluster
12:03 meghanam joined #gluster
12:03 meghanam_ joined #gluster
12:06 soumya joined #gluster
12:10 glusterbot New news from newglusterbugs: [Bug 1158037] brick failure detection does not work for ext4 filesystems <https://bugzilla.redhat.com/show_bug.cgi?id=1158037>
12:13 virusuy joined #gluster
12:19 B21956 joined #gluster
12:19 gehaxelt joined #gluster
12:23 kdhananjay joined #gluster
12:25 theron joined #gluster
12:25 bala joined #gluster
12:27 overclk joined #gluster
12:40 glusterbot New news from newglusterbugs: [Bug 1155285] twitter link on community page broken <https://bugzilla.redhat.com/show_bug.cgi?id=1155285>
12:41 prasanth_ joined #gluster
12:51 rgustafs joined #gluster
12:58 Fen1 joined #gluster
12:58 dusmant joined #gluster
13:00 coredump joined #gluster
13:01 jdarcy joined #gluster
13:03 Thilam joined #gluster
13:05 overclk joined #gluster
13:05 haomaiwa_ joined #gluster
13:09 Thilam Hi guys, do you have an idea when the 3.5.3 Debian packages will be released?
13:10 Thilam They were due 2 weeks ago but I don't see anything new; any idea?
13:10 julim joined #gluster
13:11 glusterbot New news from newglusterbugs: [Bug 1147236] gluster 3.6.0 compatibility issue with gluster 3.3 <https://bugzilla.redhat.com/show_bug.cgi?id=1147236> || [Bug 1151696] mount.glusterfs fails due to race condition in `stat` call <https://bugzilla.redhat.com/show_bug.cgi?id=1151696> || [Bug 1158051] [USS]: files/directories with the name of entry-point directory present in the snapshots cannot be accessed <https://bugzilla.redh
13:11 kshlm joined #gluster
13:19 ilbot3 joined #gluster
13:19 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/ | GlusterFS 3.6 test week - https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
13:22 bennyturns joined #gluster
13:24 plarsen joined #gluster
13:32 overclk joined #gluster
13:34 bala joined #gluster
13:43 theron joined #gluster
13:52 jobewan joined #gluster
13:59 overclk joined #gluster
14:04 haomaiwa_ joined #gluster
14:13 nbalachandran joined #gluster
14:18 failshell joined #gluster
14:22 msmith joined #gluster
14:23 wushudoin joined #gluster
14:25 xleo joined #gluster
14:28 RameshN joined #gluster
14:28 failshel_ joined #gluster
14:39 RameshN joined #gluster
14:41 glusterbot New news from newglusterbugs: [Bug 1158088] Quota utilization not correctly reported for dispersed volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1158088>
14:43 tdasilva joined #gluster
15:02 haomai___ joined #gluster
15:04 chirino joined #gluster
15:07 dusmantkp_ joined #gluster
15:19 DV_ joined #gluster
15:20 dusmantkp__ joined #gluster
15:21 zerick joined #gluster
15:26 zerick joined #gluster
15:49 zerick joined #gluster
16:00 n-st joined #gluster
16:01 calisto joined #gluster
16:01 Pupeno joined #gluster
16:03 Pupeno joined #gluster
16:09 meghanam joined #gluster
16:09 meghanam_ joined #gluster
16:09 andreask1 joined #gluster
16:10 andreask joined #gluster
16:10 _dist joined #gluster
16:11 _dist joined #gluster
16:11 glusterbot New news from newglusterbugs: [Bug 1158120] Data corruption due to lack of cache revalidation on open <https://bugzilla.redhat.com/show_bug.cgi?id=1158120> || [Bug 1158123] Quick-read translator does not invalidate its cache <https://bugzilla.redhat.com/show_bug.cgi?id=1158123> || [Bug 1158126] md-cache checks for modification using whole seconds only <https://bugzilla.redhat.com/show_bug.cgi?id=1158126> || [Bug 1158129]
16:12 xleo left #gluster
16:13 sputnik13 joined #gluster
16:15 andreask left #gluster
16:18 semiosis @later tell jwillis0720 please ask your question in channel
16:18 glusterbot semiosis: The operation succeeded.
16:36 ira joined #gluster
16:41 glusterbot New news from newglusterbugs: [Bug 1158130] Not possible to disable fopen-keeo-cache when mounting <https://bugzilla.redhat.com/show_bug.cgi?id=1158130>
16:50 sputnik13 joined #gluster
16:59 Intensity joined #gluster
17:02 jobewan joined #gluster
17:19 lmickh joined #gluster
17:21 chirino joined #gluster
17:28 David_H_Smith joined #gluster
17:28 Pupeno_ joined #gluster
17:29 David_H_Smith joined #gluster
17:38 B21956 joined #gluster
17:44 _Bryan_ joined #gluster
17:45 Pupeno joined #gluster
17:46 MrAbaddon joined #gluster
17:50 chirino joined #gluster
17:59 lalatenduM joined #gluster
18:06 verboese joined #gluster
18:16 jobewan joined #gluster
18:20 zerick joined #gluster
18:24 ricky-ti1 joined #gluster
18:30 chirino joined #gluster
18:56 klaas joined #gluster
19:22 jobewan joined #gluster
19:23 MrAbaddon joined #gluster
19:39 MrAbaddon joined #gluster
19:41 rotbeard joined #gluster
19:47 lalatenduM joined #gluster
19:48 ricky-ticky1 joined #gluster
19:50 sijis joined #gluster
19:54 quique joined #gluster
19:57 quique on centos i'm getting a package dependency error trying to install glusterfs-server
19:57 semiosis centos version?  glusterfs version?
19:58 quique centos 6.5 gluster 3.5.2
19:58 quique how do the centos updates have glusterfs-libs-3.6.0.29-2.el6.x86_64
19:59 quique before the gluster-epel repo?
19:59 plarsen joined #gluster
19:59 JoeJulian @stupid redhat
19:59 semiosis @latest
19:59 glusterbot semiosis: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
20:00 semiosis 3.6.0 isn't even GA yet!!!
20:00 johnmark oy
20:01 quique JoeJulian, semiosis: http://fpaste.org/145941/
20:01 glusterbot Title: #145941 Fedora Project Pastebin (at fpaste.org)
20:01 JoeJulian @learn stupid redhat as For no good reason, Red Hat packaged their storage product with the name gluster and versions inconsistent with upstream. The downstream (RHEL/CentOS) gluster.*3.6.* packages need to be overridden with yum priorities. Just set the gluster (upstream) repo with a lower priority than the base and update repos.
20:01 glusterbot JoeJulian: The operation succeeded.
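A sketch of the priorities fix the factoid describes, for CentOS 6 (file and repo names are illustrative; with yum-plugin-priorities, a lower number wins, and repos without one default to 99):

    yum install yum-plugin-priorities
    # then add to the [glusterfs-epel] section of /etc/yum.repos.d/glusterfs-epel.repo:
    #     priority=50
    yum clean all
    yum install glusterfs-server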
20:01 semiosis ouch
20:02 semiosis JoeJulian: does this let you hate ubuntu any less?
20:02 JoeJulian no
20:02 semiosis didnt think so
20:02 JoeJulian It's still a stupid decision, but it's less of a problem than blocking during boot on a headless server, or activating unconfigured services.
20:04 semiosis systemd should fix all that
20:05 JoeJulian ... plus ubuntu uses stupid names for releases instead of version numbers.... ;)
20:06 jwillis_ joined #gluster
20:06 _dist JoeJulian: they use both, but they are stupid. Oh you mean in the packages... yeah I hate that.
20:06 rshott joined #gluster
20:07 JoeJulian The only valid use for a pangolin: https://c2.staticflickr.com/6/5178/5508498285_df55c082f8_b.jpg
20:07 semiosis JoeJulian: version numbers are YY.MM and released like clockwork every 6 months
20:08 JoeJulian deb http://ubuntu.cloud.io.com/mirror/archive.ubuntu.com/ubuntu/ precise main restricted universe multiverse
20:09 SOLDIERz joined #gluster
20:09 JoeJulian Where's the version number in that?
20:09 jwillis_ can anyone tell me how bad an idea it is to use a partition that holds a brick to also store data from a straight mount. In other words /export/myvol/brick is in a gluster volume, but /export/myvol/someotherdata is not.
20:09 jwillis_ As someotherdata grows, would gluster be able to rebalance?
20:09 JoeJulian It won't automatically rebalance, no.
20:10 semiosis furthermore, glusterfs balances files, not bytes
20:10 JoeJulian But if you do rebalance (and I'm not sure I would yet), it won't move a file from a less-full brick to a more-full one.
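For completeness, the rebalance being discussed is a per-volume operation; a minimal sketch ("myvol" is a placeholder):

    gluster volume rebalance myvol start
    gluster volume rebalance myvol status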
20:11 jwillis_ ok thank you
20:11 jwillis_ !
20:12 JoeJulian I would use lvm to make those two different partitions and allocate extents as necessary.
20:12 jwillis_ yes thats what I will probably do
20:12 JoeJulian ( which is actually how I did it ).
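A sketch of that LVM layout, with a hypothetical volume group "vg0" and illustrative sizes; free extents left in the VG can be grown into either logical volume later:

    lvcreate -L 500G -n brick1 vg0 && mkfs.xfs /dev/vg0/brick1
    lvcreate -L 200G -n otherdata vg0 && mkfs.xfs /dev/vg0/otherdata
    # later, grow whichever side needs it (XFS grows online, while mounted)
    lvextend -L +100G /dev/vg0/otherdata
    xfs_growfs /mnt/otherdata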
20:13 jwillis_ im trying to set up a mongo server to read/write to one of my nodes, but the minimal performance hit I get from making them part of a gluster volume might be an issue...so i'll just partition some space for mongo
20:14 semiosis afaik, mongo has its own replication & distribution, so no reason to run that over gluster
20:14 JoeJulian precisely what he's suggesting.
20:14 semiosis s/distribution/sharding/
20:14 glusterbot What semiosis meant to say was: afaik, mongo has its own replication & sharding, so no reason to run that over gluster
20:15 JoeJulian I always like to name one mongo server, "Lloyd".
20:15 jwillis_ i know, but all of the nodes are being used by other people in the lab, i would have to partition each node for sharding
20:18 jwillis_ @semiosis - do you know why apt-get puts glusterd in /usr/sbin instead of /etc/init.d?
20:19 semiosis glusterd is a symlink to the glusterfsd executable
20:19 semiosis iirc
20:19 lmickh joined #gluster
20:20 semiosis furthermore, on ubuntu, the initscript you're looking for would be called glusterfs-server, rather than glusterd, but it has been replaced with an upstart job, which you can find in /etc/init/glusterfs-server.conf
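Both claims are easy to verify on an Ubuntu box of that era (paths as shipped in the Ubuntu packages):

    ls -l /usr/sbin/glusterd          # symlink to glusterfsd
    cat /etc/init/glusterfs-server.conf
    status glusterfs-server           # upstart job, not an /etc/init.d script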
20:20 SOLDIERz joined #gluster
20:20 jwillis_ thats what I was looking for, thank you
20:20 semiosis yw
20:20 semiosis why?
20:21 semiosis are you having some problem?
20:22 jwillis_ i was using update-rc.d glusterd defaults
20:22 jwillis_ to automatically start glusterfs-server
20:22 jwillis_ but you have already configured it to start
20:22 semiosis yes
20:23 jwillis_ so i don't need to do anything
20:23 semiosis correct
20:23 jwillis_ anytime the server reboots it will autostart
20:23 jwillis_ which is great
20:23 semiosis :D
20:26 SOLDIERz joined #gluster
20:30 failshell joined #gluster
20:37 calisto joined #gluster
20:42 failshel_ joined #gluster
21:06 gildub joined #gluster
21:20 jobewan joined #gluster
21:23 badone joined #gluster
21:25 Pupeno joined #gluster
21:37 badone joined #gluster
21:37 davemc the etherpad is updated for tomorrow's community meeting. Please feel free to review it and offer modifications
21:38 davemc Community meeting tomorrow at 12:00 UTC
21:38 davemc etherpad is at https://public.pad.fsfe.org/p/gluster-community-meetings
21:38 glusterbot Title: FSFE Etherpad: public instance (at public.pad.fsfe.org)
21:42 glusterbot New news from newglusterbugs: [Bug 1158226] Inode infinite loop leads to glusterfsd segfault <https://bugzilla.redhat.com/show_bug.cgi?id=1158226>
21:48 JoeJulian omg... davemc, I have a midnight change and you're going to make me drag my sorry butt out of bed at 5am for these... <sigh>
21:54 davemc JoeJulian, equal sigh here.  At least since it's based on UTC, the time here doesn't shift
21:55 JoeJulian Doesn't help me. I work on MST. :D
21:55 ws2k3 davemc is it here in irc or ?
21:55 davemc in irc, #gluster-meeting
21:55 JoeJulian I live in Seattle and work in Phoenix.
21:55 davemc JoeJulian,  ah.  I'm in Emerlad Hills CA, so PDT
21:56 davemc s/rlad/rald/
21:56 glusterbot What davemc meant to say was: JoeJulian,  ah.  I'm in Emerald Hills CA, so PDT
21:56 davemc PDT till this weekend
21:56 JoeJulian Yeah, I'm sure I'm going to start annoying everyone around me when I never know what time it is here.
21:57 davemc ws2k3, hope you can join us
21:57 davemc JoeJulian, tomorrow will be focused on final 3.6.0 readiness, so could be good
21:58 davemc or you can blanket approve it now and sleep in <grin>
22:00 JoeJulian Hehe
22:02 m0zes joined #gluster
22:05 badone joined #gluster
22:13 theron joined #gluster
22:20 julim joined #gluster
22:30 plarsen joined #gluster
22:41 Pupeno joined #gluster
22:41 Pupeno joined #gluster
23:44 David_H_Smith joined #gluster
