IRC log for #gluster, 2013-12-16

All times shown according to UTC.

Time Nick Message
00:01 mtanner joined #gluster
00:08 psyl0n joined #gluster
00:12 gmcwhistler joined #gluster
00:20 fisherrr joined #gluster
00:27 mtanner_ joined #gluster
00:34 _pol joined #gluster
00:35 pureflex joined #gluster
00:56 psyl0n joined #gluster
01:21 mattapp__ joined #gluster
01:23 mtanner joined #gluster
01:34 psyl0n joined #gluster
02:07 bharata-rao joined #gluster
02:18 vpshastry joined #gluster
02:37 pureflex joined #gluster
02:37 shubhendu joined #gluster
02:38 harish joined #gluster
03:01 gdubreui joined #gluster
03:12 21WABX8OH joined #gluster
03:14 JoeJulian sashko: I think that should cause split-brain. The extended attributes include metadata that determines if there are pending changes that need to be healed. The correct way is to replace with a blank brick and do a find on the client mountpoint, or do a ,,(targeted self-heal)
03:14 glusterbot sashko: I do not know about 'targeted self-heal', but I do know about these similar topics: 'targeted self heal'
03:14 JoeJulian ~targeted self heal | sashko
03:14 glusterbot sashko: http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/
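For reference, the "targeted self heal" the bot links to boils down to walking only the affected subtree on a FUSE client mount so that each lookup triggers a heal check. A rough sketch, with the mountpoint and directory as placeholders:

    find /mnt/gluster/some/dir -noleaf -print0 | xargs --null stat > /dev/null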
03:19 kshlm joined #gluster
03:23 harish joined #gluster
03:24 saurabh joined #gluster
03:31 sashko JoeJulian: I know, but I think split-brain is only if both bricks have trusted.afr set to non-zero?
03:31 sashko JoeJulian: the healthy one has it all set to zero, so it might work, I'll try it
03:32 sashko the reason why I can't do empty brick is because 3.2 is blocking when doing self-heal and I can't afford that long of a downtime
03:40 meghanam_ joined #gluster
03:40 meghanam joined #gluster
03:42 itisravi joined #gluster
03:58 kanagaraj joined #gluster
04:05 shylesh joined #gluster
04:34 ppai joined #gluster
04:35 MiteshShah joined #gluster
04:37 pureflex joined #gluster
04:39 RameshN joined #gluster
04:42 mtanner_ joined #gluster
04:49 zerick joined #gluster
04:51 pureflex joined #gluster
04:52 hagarth joined #gluster
04:53 bala joined #gluster
04:54 pureflex joined #gluster
05:02 bala joined #gluster
05:08 kdhananjay joined #gluster
05:08 rastar joined #gluster
05:11 ababu joined #gluster
05:14 kshlm joined #gluster
05:18 psharma joined #gluster
05:24 aravindavk joined #gluster
05:37 lalatenduM joined #gluster
05:39 CheRi joined #gluster
05:41 achuz joined #gluster
05:42 kaushal_ joined #gluster
05:44 raghu joined #gluster
05:45 pureflex joined #gluster
05:45 mohankumar joined #gluster
05:49 achuz joined #gluster
05:51 mikedep333 joined #gluster
05:55 bulde joined #gluster
05:58 raghug joined #gluster
05:58 vpshastry joined #gluster
06:12 ngoswami joined #gluster
06:20 mikedep333 joined #gluster
06:24 raghu joined #gluster
06:25 dusmant joined #gluster
06:27 glusterbot New news from newglusterbugs: [Bug 1024369] Unable to shrink volumes without dataloss <https://bugzilla.redhat.com/show_bug.cgi?id=1024369>
06:29 prasanth joined #gluster
06:31 _pol joined #gluster
06:37 raghug joined #gluster
06:40 ngoswami joined #gluster
06:41 anands joined #gluster
06:49 shubhendu joined #gluster
06:50 kshlm joined #gluster
06:51 _pol joined #gluster
06:53 raghug joined #gluster
06:54 hagarth joined #gluster
06:57 glusterbot New news from newglusterbugs: [Bug 1043373] brick server kenal panic <https://bugzilla.redhat.com/show_bug.cgi?id=1043373>
07:00 shruti joined #gluster
07:00 kshlm joined #gluster
07:27 zeittunnel joined #gluster
07:32 jtux joined #gluster
07:35 ricky-ticky1 joined #gluster
07:46 pureflex joined #gluster
08:01 hagarth joined #gluster
08:10 FarbrorLeon joined #gluster
08:23 spandit joined #gluster
08:31 harish_ joined #gluster
08:37 mgebbe_ joined #gluster
08:46 shylesh joined #gluster
08:57 andreask joined #gluster
08:59 vimal joined #gluster
09:04 warci joined #gluster
09:10 abyss^ I moved one glusterfs server to another location and started a full heal. Files replicate properly, but on the first gluster server (the one that was moved) I have much less data (in GB) than on the second. I measured the sizes and found that the '.glusterfs' directory on the first server is much smaller than on the second... Is that normal? Why?
09:10 satheesh1 joined #gluster
09:13 Slasheri joined #gluster
09:13 Slasheri joined #gluster
09:17 ndevos abyss^: that is uncommon, unless there are orphan gfid-files in the .glusterfs/??/??/ directories
09:19 ndevos abyss^: I think you can use a command like this to check: find .glusterfs/??/??/ -type f -links 1
09:20 ndevos abyss^: each file in the .glusterfs/??/??/ directory should be a hard-link to the actual file - if there is a file with only one link, it does not have a 'real' file on the brick
09:22 ndevos abyss^: those 'real' files could be missing in case not all directories are re-constructed on the brick, but you can probably see that from errors in the self-heal
09:22 abyss^ ndevos: OK. Thank you. But what I should do with orphan files? Just delete?
09:23 abyss^ self-heal return nothing just everything is ok and now I have 0 files to heal (so everything is fine).
09:26 ndevos abyss^: personally I would inspect those files, check their contents/owner/permissions/xattrs/date/... and see if they really should be gone - then (backup? and) delete them
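A rough sketch of that inspection, run from the brick root (the brick path and the gfid file name are placeholders):

    cd /bricks/brick1
    find .glusterfs/??/??/ -type f -links 1               # candidate orphans: gfid files with link count 1
    stat .glusterfs/xx/yy/<gfid-file>                     # check size/owner/mtime
    getfattr -d -m . -e hex .glusterfs/xx/yy/<gfid-file>  # inspect xattrs before deciding to delete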
09:27 cyberbootje hi
09:27 glusterbot cyberbootje: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:27 abyss^ ndevos: Ok. Thank you for your advice:)
09:28 cyberbootje If I have 1 storage server now and want a replica 2 setup in the future, can I do that by adding a second storage server?
09:31 abyss^ ndevos: BTW: so when I have 1TB data then I have 1TB copy of this data in .glusterfs directory?;) So if I have 1TB data then I have to prepare 2TB of disk?;)
09:34 abyss^ cyberbootje: check ;) Make two VMs and check if you can add one to the other. I don't know if it's possible to make a gluster volume from one server (the volume create command needs at least two servers). Of course you could do something like this: create the volume from two servers, then detach the second and in the future attach it again or use replace-brick....
09:44 mgebbe_ joined #gluster
09:46 kevein joined #gluster
09:46 ndevos abyss^: no, the files under .glusterfs/??/??/* are hard-links, they share the same inode as the use-visible 'real' filename
09:47 pureflex joined #gluster
09:47 ndevos s/use-visible/user-visible/
09:47 glusterbot ndevos: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
09:48 ndevos abyss^: check 'man 2 link' for details about hard-links :)
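To see the hard-link relationship ndevos describes, compare inode numbers on the brick; the file name and gfid below are placeholders:

    ls -li /bricks/brick1/dir/somefile                           # note the inode number and link count (>= 2)
    getfattr -n trusted.gfid -e hex /bricks/brick1/dir/somefile  # gfid 0xaabb... maps to .glusterfs/aa/bb/aabb...
    ls -li /bricks/brick1/.glusterfs/aa/bb/aabb...               # same inode, so no extra space is used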
09:50 mtanner joined #gluster
09:51 shubhendu joined #gluster
09:54 _pol joined #gluster
09:54 warci joined #gluster
09:54 prasanth joined #gluster
09:54 dusmant joined #gluster
09:54 bulde joined #gluster
09:54 ababu joined #gluster
09:54 badone joined #gluster
09:54 DV__ joined #gluster
09:54 semiosis joined #gluster
09:54 JordanHackworth joined #gluster
09:54 bivak joined #gluster
09:54 kdhananjay joined #gluster
10:07 abyss^ ndevos: ok, you're right my bad:)
10:08 abyss^ that was stupid question ;)
10:10 MiteshShah joined #gluster
10:13 psyl0n joined #gluster
10:18 gdubreui joined #gluster
10:18 _pol joined #gluster
10:22 calum_ joined #gluster
10:28 badone joined #gluster
10:36 XATRIX joined #gluster
10:39 XATRIX Hi, can you tell me, do I have to set up the "distribute translator", or is it a self-sufficient module that I don't have to configure myself?
10:41 samppah XATRIX: you don't have to do anything special to set it up.. it's automatically used if you make distributed volume
10:42 harish joined #gluster
10:46 XATRIX samppah: by "distributed volume" you mean stripe or replica ?
10:47 samppah XATRIX: neither.. distribute is its own volume type
10:52 samppah XATRIX: if you create new volume without defining stripe or replica, it's created as distribute
10:52 XATRIX Ah... so the main difference is in how you create the volume: #gluster create volume datastorage [replica 2] transport...
10:52 XATRIX Yea
10:53 XATRIX But replica won't give me the same performance as distribute?
10:54 XATRIX Also, if i decide to use distribute, how can i insure myself against disk failures ?
10:54 samppah you can combine replica and distribute together
10:54 XATRIX can i use raid for mirror and later put a gluster on a md0 partition ?
10:55 samppah if you add more bricks to replica set then it's "changed" to distribute-replicate unless you increase replica value
10:55 XATRIX I have only 4 disks
10:55 XATRIX What if i create 2 mdraid1 mirrors
10:55 samppah how many servers you have?
10:55 XATRIX 2
10:56 samppah 4 disks per server or just 2?
10:56 XATRIX My server has 1x500Gb 2x1TB(for storage) , i have 2 servers
10:56 samppah okay
10:58 samppah XATRIX: personally i'd probably use raid1 and also replicate it with glusterfs
10:58 XATRIX what if i create 1TB mdraid1 array ? put ext4 on it, and create replica 2 volume between ?
10:59 samppah yeah, exactly like that
10:59 samppah probably use xfs instead of ext4 if possible
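The setup being discussed, roughly (hostnames, brick paths and volume name are examples): one XFS-formatted md RAID1 array per server, each used as a brick of a two-way replicated volume.

    mkfs.xfs -i size=512 /dev/md0          # 512-byte inodes leave room for gluster's xattrs
    mkdir -p /bricks/brick1
    mount /dev/md0 /bricks/brick1
    gluster volume create datavol replica 2 server1:/bricks/brick1 server2:/bricks/brick1
    gluster volume start datavol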
10:59 XATRIX Yes, that's what I did. But with the fuse.glusterfs mount I'm not getting good performance...
10:59 XATRIX Gonna disable apc.stat()
11:00 XATRIX And check again, but NFS gives much better "speed" for PHP and other stuff
11:00 samppah @php
11:00 glusterbot samppah: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
11:00 glusterbot samppah: --fopen-keep-cache
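Spelled out, the mount the bot describes looks roughly like this (server name, volume name, mountpoint and the timeout values are only examples):

    glusterfs --volfile-server=server1 --volfile-id=myvol \
              --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 \
              --fopen-keep-cache /var/www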
11:00 XATRIX xfs doesn't support quotas
11:02 bharata-rao joined #gluster
11:03 samppah XATRIX: okay, i'm not sure what's the status with glusterfs & ext4
11:04 vpshastry1 joined #gluster
11:04 XATRIX It works as far as I know
11:04 XATRIX But i need linux quotas to work with
11:05 franc joined #gluster
11:06 diegows joined #gluster
11:11 shubhendu joined #gluster
11:21 harish_ joined #gluster
11:22 eseyman joined #gluster
11:31 XATRIX samppah: did you benchmark your xfs performance against ext3/4?
11:32 samppah XATRIX: unfortunately no :( ext4 was broken with glusterfs when we installed our system
11:34 MiteshShah joined #gluster
11:36 XATRIX Hm
11:36 XATRIX Ok, i'll test it myself
11:40 Norky joined #gluster
11:43 kanagaraj joined #gluster
11:44 satheesh1 joined #gluster
11:47 XATRIX samppah: can i use Distribute on a 2 bricks ?
11:47 samppah XATRIX: yes
11:48 pureflex joined #gluster
11:48 XATRIX If I put 2 disks each into an mdraid1, so I have 2 arrays, can I distribute among them?
11:48 samppah yep
11:48 XATRIX If one of disks will fail, it shouldn't kill my volume
11:48 samppah no.. but if server dies then files on that brick aren't accessible
11:48 XATRIX crap :(
11:50 XATRIX Then i have to sacrifice my speed a bit ;(
11:51 RameshN_ joined #gluster
11:53 raghug joined #gluster
11:59 pk joined #gluster
12:04 edward2 joined #gluster
12:04 harish joined #gluster
12:08 raghug joined #gluster
12:10 itisravi joined #gluster
12:15 ccha4 what does "thinnified client" mean?
12:17 MiteshShah joined #gluster
12:18 ccha4 ok found information https://gist.github.com/avati/af04f1030dcf52e16535#file-plan-md
12:18 glusterbot Title: GlusterFS 4.0 (at gist.github.com)
12:19 vpshastry1 joined #gluster
12:20 ppai joined #gluster
12:20 CheRi joined #gluster
12:24 dneary joined #gluster
12:28 ngoswami joined #gluster
12:29 bulde joined #gluster
12:36 MiteshShah joined #gluster
12:46 TvL2386 joined #gluster
12:47 MiteshShah joined #gluster
12:51 sroy joined #gluster
12:59 vpshastry1 joined #gluster
13:04 social root     23487  0.0  4.2 531436 339608 ?       Ssl  13:46   0:00 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/94957a59fcf4142de8f51a9b4642208c.socket
13:05 social ^^ 300MB RES is quite a lot for nfs daemon which we aren't using isn't it?
13:11 zeittunnel joined #gluster
13:20 Remco Probably the default cache size or something
13:20 Remco Disable nfs if you don't need it
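Disabling the built-in NFS server is a per-volume setting; something like the following (volume name is an example):

    gluster volume set myvol nfs.disable on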
13:22 prasanth joined #gluster
13:28 ira joined #gluster
13:30 davidbierce joined #gluster
13:35 social Remco: if I disable it it spams logs, I think I have bug open for that
13:44 jtux joined #gluster
13:48 vpshastry joined #gluster
13:48 pureflex joined #gluster
13:50 sroy joined #gluster
13:54 jtux joined #gluster
13:54 leem joined #gluster
13:57 pk left #gluster
13:59 jtux joined #gluster
14:00 vpshastry left #gluster
14:02 psyl0n joined #gluster
14:05 bennyturns joined #gluster
14:16 dusmant joined #gluster
14:16 plarsen joined #gluster
14:17 edong23__ joined #gluster
14:18 johnmark oh hai
14:18 JMWbot johnmark: @3 purpleidea reminded you to: thank purpleidea for an awesome JMWbot (please report any bugs) [70616 sec(s) ago]
14:18 JMWbot johnmark: Use: JMWbot: @done <id> to set task as done.
14:19 jiqiren_ joined #gluster
14:19 purpleidea johnmark: haha i guess it works
14:19 compbio_ joined #gluster
14:19 johnmark LOL
14:19 purpleidea (it should pm you the private reminders people add)
14:19 johnmark why yes, yes it did
14:19 purpleidea isn't that considerate of me!
14:19 johnmark um, sure?
14:19 purpleidea johnmark: if you have trouble using the ui, let me know
14:19 johnmark my wife will love it
14:19 _pol joined #gluster
14:19 Rydekull_ joined #gluster
14:19 johnmark I'll make sure not to tell her about it - would get dangerous
14:19 purpleidea when you @done a task, it will pm the user that set it to tell them you closed it...
14:20 lanning_ joined #gluster
14:20 purpleidea johnmark: your wife use irc?
14:20 johnmark purpleidea: nope
14:20 purpleidea s/use/uses
14:20 johnmark just don't want to give her any ideas ;)
14:20 Amanda That's a dangerous game to play
14:20 johnmark indeed!
14:20 johnmark heh
14:20 johnmark although, this does give me ideas
14:20 ira No.. but worse, his co-workers use IRC!
14:20 purpleidea johnmark: i set it to only bug you up to once per day, but let me know if you want it more often or if it doesn't behave
14:20 johnmark haha
14:21 johnmark purpleidea: haha! sure thing
14:21 purpleidea ira: that's the idea! have at it...
14:21 johnmark purpleidea: I am actually wondering if this could be useful for, you know, other people in addition to me
14:21 johnmark purpleidea: like maybe this is a good way to do reminders for all community activity
14:21 purpleidea ira: use: JMWbot: @remind blah blah blah
14:21 purpleidea johnmark: i think other bots must have this sort of functionality... i dunno
14:21 ira purpleidea: Never give me such powers... ;)
14:22 johnmark ira: oh dear
14:22 ira I'm joking.... :)
14:22 purpleidea johnmark: i hacked this together to only support bugging one user (you) but i could have extended it to be more general. but then it stops being a hack and becomes a project.
14:22 purpleidea ira: you. know. you. love. THE POWER!
14:22 johnmark purpleidea: exactly! then you're on the hook!
14:23 ira purpleidea: And suddenly, it uploads to a website...
14:23 ira And you are doomed.
14:23 purpleidea johnmark: fund it if you want it !
14:23 portante_ joined #gluster
14:23 ternovka joined #gluster
14:23 B21956 joined #gluster
14:23 johnmark purpleidea: but seriously, I wonder if JoeJulian would be interested in adding this type of capability to glusterbot
14:23 purpleidea johnmark: so about your reminders!
14:23 johnmark purpleidea: oh I know, I know
14:23 purpleidea johnmark: glusterbot is supybot
14:23 johnmark purpleidea: right. is this not based on supybot?
14:24 purpleidea johnmark: no
14:24 johnmark purpleidea: aha, ok
14:24 johnmark purpleidea: what did you do it with?
14:24 purpleidea johnmark: http://gluster.org/pipermail/gluster-users/2013-December/038329.html
14:24 glusterbot Title: [Gluster-users] Introducing... JMWBot (the alter-ego of johnmark) (at gluster.org)
14:24 purpleidea (python-twisted)
14:25 jruggiero joined #gluster
14:26 ndevos purpleidea: you should have called it JWMBug :)
14:26 ndevos s/JWM/JMW/
14:26 glusterbot What ndevos meant to say was: purpleidea: you should have called it JMWBug :)
14:26 Rydekull joined #gluster
14:27 johnmark ndevos: *sigh*
14:27 purpleidea ndevos: it took a lot of self-control to not implement a @harass action
14:28 purpleidea ndevos: if johnmark doesn't refund me soon, i might have to patch it.
14:28 johnmark purpleidea: ah geez
14:28 ndevos purpleidea: good idea!
14:28 johnmark this is getting dangerous...
14:28 purpleidea JMWbot: @remind remind purpleidea to implement a @harass action for JMWbot
14:28 JMWbot purpleidea: Okay, I'll remind johnmark when I see him. [id: 5]
14:29 purpleidea haha awesome. i could use this to get johnmark to remind me of things that _I_ have to do!
14:29 glusterbot New news from newglusterbugs: [Bug 1039544] [FEAT] "gluster volume heal info" should list the entries that actually required to be healed. <https://bugzilla.redhat.com/show_bug.cgi?id=1039544>
14:29 johnmark incidentally, I set aside tomorrow to do nothing but close whatever open accounts I have with purpleidea, Ramereth, ndevos, and udo
14:29 johnmark purpleidea: indeed :D
14:29 purpleidea johnmark use https://developer.valvesoftware.com/wiki/Valve_Time
14:29 glusterbot Title: Valve Time - Valve Developer Community (at developer.valvesoftware.com)
14:29 purpleidea FWIW johnmark uses https://developer.valvesoftware.com/wiki/Valve_Time
14:30 johnmark purpleidea: thanks!
14:30 purpleidea johnmark: so, about hosting for ndevos and i on download.gluster.o or similar?
14:31 lalatenduM joined #gluster
14:31 vpshastry joined #gluster
14:31 johnmark purpleidea: I can give you an account
14:31 PatNarciso joined #gluster
14:31 purpleidea johnmark: cool, let's do it
14:31 johnmark purpleidea: but if you upload pr0n...
14:31 johnmark ;)
14:31 purpleidea johnmark: puppet porn!
14:32 purpleidea (don't google that, it probably won't end well, not that i've tried)
14:32 johnmark eek
14:32 johnmark purpleidea: did you get my hangout invite? I want to see if you can join without having to do the g+ crap
14:32 jtux joined #gluster
14:33 _BryanHM_ joined #gluster
14:33 andreask joined #gluster
14:33 johnmark purpleidea: wait, I don't understand what URL you sent me
14:34 purpleidea johnmark: what url?
14:34 johnmark purpleidea: I thought you were sending me to the source code repo for the bot
14:34 johnmark no idea what https://developer.valvesoftware.com/wiki/Valve_Time is about
14:34 glusterbot Title: Valve Time - Valve Developer Community (at developer.valvesoftware.com)
14:34 purpleidea johnmark: oh shit, i just notices the PM's. sorry
14:35 johnmark ha
14:35 purpleidea johnmark: valve time is a valve joke.. read it it's funny
14:35 johnmark ohhh... ok
14:36 purpleidea valve says they'll do something in X time, but it always ends up being X+Y where y is some other time.
14:36 johnmark purpleidea: oh! haha... yeah, Ok, I get it :P
14:38 XATRIX mdraid1->LVM->XFS->GlusterFS, 2-node cluster, fuse.glusterfs, copying thousands of small files at ~2.16-3.20MB/s
14:38 XATRIX very approx 80-100 files/sec
14:39 XATRIX 1329M will take approx 7 minutes to copy
14:39 XATRIX and the time keeps growing
14:39 XATRIX 1GbE connection between
14:41 radez if I add bricks to my cluster and rebalance, should a mounted fuse volume show more available space?
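For reference, expanding a volume generally looks like the following (names are placeholders; replicated volumes need bricks added in multiples of the replica count). df on a FUSE mount should reflect the extra capacity once the bricks are added, and the rebalance then spreads existing files onto them:

    gluster volume add-brick myvol server3:/bricks/brick1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
    df -h /mnt/myvol        # run on a client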
14:41 kaptk2 joined #gluster
14:45 johnmark ndevos: purpleidea: you can now upload stuff to your home dirs on dl.g.o
14:45 johnmark will send particulars
14:45 ndevos johnmark: oh, I have a homedir there?
14:45 ndevos thanks?
14:45 purpleidea ndevos: haha same comment ^
14:46 ndevos purpleidea: :)
14:47 B219561 joined #gluster
14:50 vpshastry left #gluster
14:59 jruggiero joined #gluster
15:03 bala joined #gluster
15:07 dbruhn joined #gluster
15:13 wushudoin joined #gluster
15:13 morse_ joined #gluster
15:14 lanning joined #gluster
15:14 Rydekull_ joined #gluster
15:14 tjikkun joined #gluster
15:14 brosner_ joined #gluster
15:14 tjikkun joined #gluster
15:15 plarsen joined #gluster
15:15 REdOG joined #gluster
15:15 plarsen joined #gluster
15:15 askb joined #gluster
15:16 plarsen joined #gluster
15:16 anands joined #gluster
15:24 X3NQ joined #gluster
15:26 ndk joined #gluster
15:27 dneary joined #gluster
15:27 xymox joined #gluster
15:29 glusterbot New news from newglusterbugs: [Bug 1043548] Cannot compile glusterfs on OpenSuse 13.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1043548>
15:30 hybrid512 joined #gluster
15:34 ndevos johnmark: do you have an idea who would like to test CloudStack + Gluster? The latest patches make it possible to start VMs with QEMU+libgfapi and QCOW2 files on a Gluster Volume
15:35 ndevos latest patches as on https://forge.gluster.org/cloudstack-gluster/cloudstack (they need to be cleaned up before posting to the CloudStack devs)
15:35 glusterbot Title: cloudstack in cloudstack-gluster - Gluster Community Forge (at forge.gluster.org)
15:38 badone joined #gluster
15:39 tqrst O:
15:42 zerick joined #gluster
15:56 _ilbot joined #gluster
15:56 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
15:58 ira joined #gluster
15:58 mohankumar joined #gluster
16:00 davidbierce joined #gluster
16:02 jag3773 joined #gluster
16:02 dewey_ joined #gluster
16:05 Oneiroi joined #gluster
16:10 jruggiero joined #gluster
16:10 GLHMarmo1 joined #gluster
16:11 aliguori joined #gluster
16:11 XATRIX Hm... if i mount my partition using NFS, i can't even start my OpenVZ container (
16:12 Remco_ joined #gluster
16:12 elyograg_ joined #gluster
16:13 dbruhn joined #gluster
16:14 vpshastry joined #gluster
16:15 klaas joined #gluster
16:17 codex_ joined #gluster
16:20 _pol joined #gluster
16:21 gergnz_ joined #gluster
16:21 _br_ joined #gluster
16:21 nikkk joined #gluster
16:21 mohankumar joined #gluster
16:21 Oneiroi joined #gluster
16:21 davidbierce joined #gluster
16:21 vikumar joined #gluster
16:21 atrius joined #gluster
16:21 kbsingh joined #gluster
16:21 lava_ joined #gluster
16:21 edong23 joined #gluster
16:21 bivak joined #gluster
16:21 JordanHackworth joined #gluster
16:21 semiosis joined #gluster
16:21 sroy joined #gluster
16:21 schrodinger_ joined #gluster
16:21 X3NQ joined #gluster
16:21 askb joined #gluster
16:21 REdOG joined #gluster
16:21 brosner joined #gluster
16:21 Rydekull_ joined #gluster
16:21 lanning joined #gluster
16:21 johnmark joined #gluster
16:21 zwu joined #gluster
16:21 fen_ joined #gluster
16:21 eryc joined #gluster
16:21 AndreyGrebenniko joined #gluster
16:21 [o__o] joined #gluster
16:21 basic` joined #gluster
16:21 foster joined #gluster
16:21 micu3 joined #gluster
16:21 ccha4 joined #gluster
16:21 d-fence_ joined #gluster
16:21 cyberbootje joined #gluster
16:21 wgao__ joined #gluster
16:21 ultrabizweb joined #gluster
16:21 micu joined #gluster
16:21 JoeJulian joined #gluster
16:21 JonathanD joined #gluster
16:21 partner joined #gluster
16:21 klaxa joined #gluster
16:21 abyss^ joined #gluster
16:21 Dave2 joined #gluster
16:21 mjrosenb joined #gluster
16:21 social joined #gluster
16:21 FooBar joined #gluster
16:21 marcoceppi joined #gluster
16:21 primusinterpares joined #gluster
16:21 Ramereth joined #gluster
16:21 cicero joined #gluster
16:21 Peanut joined #gluster
16:21 davidjpeacock joined #gluster
16:21 paratai joined #gluster
16:21 twx joined #gluster
16:21 smellis joined #gluster
16:21 eightyeight joined #gluster
16:21 atrius` joined #gluster
16:21 dblack joined #gluster
16:21 aurigus joined #gluster
16:23 davidbierce Try migrating again...it might work :)
16:23 lyang0 joined #gluster
16:24 badone_ joined #gluster
16:24 ira joined #gluster
16:24 jag3773 joined #gluster
16:25 Cenbe_ joined #gluster
16:25 Slasheri joined #gluster
16:25 Slasheri joined #gluster
16:25 portante_ joined #gluster
16:26 gergnz joined #gluster
16:26 shdwdrgn_ joined #gluster
16:27 purpleidea joined #gluster
16:27 purpleidea joined #gluster
16:27 vpshastry left #gluster
16:28 stigchristian joined #gluster
16:29 tjikkun joined #gluster
16:29 tjikkun joined #gluster
16:29 _br_ joined #gluster
16:30 jruggiero left #gluster
16:30 Technicool joined #gluster
16:31 Norky joined #gluster
16:32 gmcwhistler joined #gluster
16:34 Slasheri_ joined #gluster
16:36 purpleid1a joined #gluster
16:37 XpineX joined #gluster
16:38 _br_ joined #gluster
16:38 plarsen joined #gluster
16:42 theron joined #gluster
16:42 tryggvil joined #gluster
16:43 stigchristian joined #gluster
16:43 badone_ joined #gluster
16:46 japuzzo joined #gluster
16:46 rwheeler joined #gluster
16:57 smellis davidbierce: haha, i get that.  Looking into the configurable base port option for gluster :)
16:58 pureflex joined #gluster
17:01 XATRIX samppah: Can i define a block device as a brick instead of a directory ?
17:04 smellis looks like it won't be available until 3.4.2
17:05 purpleidea joined #gluster
17:06 rwheeler joined #gluster
17:09 badone_ joined #gluster
17:11 Nuxr0 XATRIX: i doubt that's possible, you can format that block device and mount it in a dir, though :)
17:11 semiosis XATRIX: no.  glusterfs needs a posix filesystem with extended attribute support for brick backend storage
17:11 semiosis xfs is recommended
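A quick way to check that a candidate brick filesystem supports the extended attributes gluster needs (path is an example; run as root, since gluster uses the trusted.* namespace):

    touch /bricks/brick1/xattr-test
    setfattr -n trusted.test -v works /bricks/brick1/xattr-test
    getfattr -n trusted.test /bricks/brick1/xattr-test
    rm /bricks/brick1/xattr-test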
17:14 XATRIX xfs can't be reduced in size
17:14 XATRIX I can't dynamically change the volume size
17:14 XATRIX 900GB/100GB and later on 500/400
17:14 Nuxr0 XATRIX: ext4 should be safe to use, for now
17:15 hagarth joined #gluster
17:15 XATRIX What about the speed ?
17:16 XATRIX Did you benchmark it against xfs
17:16 Nuxr0 no, i use xfs
17:16 Nuxr0 but i doubt ext4's speed will be your first bottleneck
17:16 Nuxr0 there are plenty of ext4 vs xfs benchmarks on the interwebz
17:22 diegows joined #gluster
17:22 XATRIX ext4 has an issue with proxmox 2.6.32 kernel
17:23 clemanso joined #gluster
17:23 XATRIX It's buggy in proxmox ve
17:23 XATRIX Ok, see you tomorrow
17:26 johnbot11 joined #gluster
17:29 bulde joined #gluster
17:30 _pol joined #gluster
17:31 Mo__ joined #gluster
17:33 _pol joined #gluster
17:33 hagarth joined #gluster
17:34 tryggvil joined #gluster
17:37 GabrieleV joined #gluster
17:37 aravindavk joined #gluster
17:44 rotbeard joined #gluster
17:46 ndk joined #gluster
17:46 badone_ joined #gluster
17:48 SpeeR joined #gluster
17:52 tryggvil joined #gluster
17:54 theron joined #gluster
17:56 johnbot11 joined #gluster
18:01 bdperkin joined #gluster
18:10 prasanth joined #gluster
18:22 _pol joined #gluster
18:23 zaitcev joined #gluster
18:23 Ravage joined #gluster
18:24 Ravage hi. im trying to create a stripe volume with existing data on the first brick. is there any recommended procedure to to so?
18:24 Ravage *do
18:26 Ravage gluster version 3.4.1 btw
18:27 JoeJulian Ravage: impossible. Create your volume and copy your files onto a client mountpoint.
18:27 Paul-C joined #gluster
18:27 JoeJulian Ravage: Also, ,,(stripe)
18:27 glusterbot Ravage: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
18:27 Ravage thats quite annoying. 7tb of data.. but okay
18:28 galarexpo joined #gluster
18:28 JoeJulian You couldn't do that with a jbod coverting it to a raid0
18:28 JoeJulian converting even... covertly or not... (/me can't type today)...
18:30 glusterbot New news from newglusterbugs: [Bug 1032122] glusterd getting oomkilled <https://bugzilla.redhat.com/show_bug.cgi?id=1032122> || [Bug 903873] Ports show as N/A in status <https://bugzilla.redhat.com/show_bug.cgi?id=903873>
18:30 Ravage i read your blog post about striping. should work for me. both servers have their own raid5 and i just want to expand the total space
18:30 JoeJulian total space, or space for an individual file? total space is better served with distribute.
18:31 jbd1 joined #gluster
18:31 Ravage distribute is an option. but i guess i still have the problems with my existing data?
18:31 JoeJulian distribute won't be a problem.
18:32 JoeJulian Create your distribute volume with the populated disk as the first brick.
18:33 JoeJulian You'll probably want to rebalance the volume after it's up and running.
18:33 Ravage okay let me try that
18:35 _pol joined #gluster
18:36 Ravage works like a charm. really didnt understand the distribute / replica 2 thing on first read
18:37 psyl0n joined #gluster
18:37 Ravage thank you JoeJulian
18:37 JoeJulian You're welcome
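Roughly what that looks like (hostnames, paths and volume name are examples); the brick that already holds the data goes first, and the rebalance afterwards spreads files across both bricks:

    gluster volume create myvol server1:/bricks/data server2:/bricks/data
    gluster volume start myvol
    gluster volume rebalance myvol start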
18:39 cfeller joined #gluster
18:40 ianweller joined #gluster
18:41 ianweller hi everyone. i'm having trouble enabling SSL in my gluster installation
18:41 johnbot11 joined #gluster
18:41 ianweller i set "option transport.socket.ssl-enabled on" in /etc/glusterfs/glusterd.vol, and the relevant log file says SSL is enabled
18:42 ianweller however when i try to run something like "gluster volume list", /var/log/glusterfs/cli.log says that SSL is not enabled, and it fails to connect (clearly)
18:49 ianweller if anybody has any suggestions, that would be wonderful. i have no idea how to get /usr/sbin/gluster to pay attention to the fact that it needs to connect via SSL
18:49 ianweller (this is on centos 6, gluster 3.4.1)
18:49 JoeJulian Odd, too, since it uses a named pipe.
18:50 ianweller /usr/sbin/gluster does?
18:50 johnbot11 joined #gluster
18:50 JoeJulian I'm mostly sure... You know... one of those things you "know" until you say it out loud...
18:50 t_dot_zilla joined #gluster
18:51 t_dot_zilla hello... is #glusterfs not a real chat?
18:51 JoeJulian It is not. It's supposed to redirect you here, is it not?
18:51 t_dot_zilla it didnt, it said the room was full
18:52 ianweller it's very difficult to find docs on enabling SSL :(
18:52 t_dot_zilla anyway... im wondering... if you have your glusterfs client configured to mount from a glusterfs server and that particular server goes down...does the client know to connect to another server in that cluster automatically?
18:52 t_dot_zilla like are the clients aware of all the servers in cluster even though they only connect to one?
18:52 JoeJulian @mount server
18:52 glusterbot JoeJulian: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
18:53 t_dot_zilla oh snap, that's hot
18:53 t_dot_zilla and the client can connect to any of the servers to get to ALL the servers?
18:53 t_dot_zilla basically you'd need EVERY single server for a single store to go down then
18:54 JoeJulian ianweller: yep, I was wrong...
18:54 t_dot_zilla i almost feel like having EVERY server in my cluster use the glusterfs
18:54 t_dot_zilla is a good idea
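Since the mount server is only used to fetch the volume definition, a common safety net is round-robin DNS or the mount helper's backup server option; roughly (server names, volume and mountpoint are examples, assuming the backupvolfile-server option in the 3.4 mount script):

    mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol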
18:54 ianweller JoeJulian: any ideas on how to get it to use SSL?
18:54 * ianweller pastes the relevant cli.log part
18:54 JoeJulian ianweller: I'm looking at that while I'm waiting forever on hold for ASUS... :/
18:55 ianweller ugh, suck
18:55 t_dot_zilla my cluster is going to have a database server, multiple voip servers, multiple web servers... does glusterfs-server use a lot of CPU/bandwidth?
18:55 ianweller sucks*
18:55 ianweller JoeJulian: http://fpaste.org/62298/13872200/ if that helps at all
18:55 glusterbot Title: #62298 Fedora Project Pastebin (at fpaste.org)
18:55 JoeJulian t_dot_zilla: depends entirely on your i/o load.
18:55 t_dot_zilla not a lot
18:56 t_dot_zilla just simple document/media storage...
18:56 t_dot_zilla not a lot of concurrent usage of files either
18:56 JoeJulian Sounds like a good fit, so far.
19:10 rwheeler joined #gluster
19:10 NuxRo joined #gluster
19:17 smellis JoeJulian: are you aware of a work around on 3.4.1 to this problem: https://bugzilla.redhat.com/show_bug.cgi?id=987555
19:18 glusterbot Bug 987555: medium, urgent, 3.4.2, ndevos, MODIFIED , Glusterfs ports conflict with qemu live migration
19:18 smellis I suspect I'll have to wait for 3.4.2, but thought you might have some more insight
19:24 ianweller JoeJulian: did you happen to find anything? i'm confused where to go from here :(
19:28 johnbot11 joined #gluster
19:32 anands joined #gluster
19:42 dbruhn joined #gluster
19:44 ianweller hmm, i think it might be that gluster 3.4.1 doesn't support SSL over the management side just yet
19:44 ianweller (although one would think if cli.log says ssl isn't enabled... you would be able to enable it)
20:02 t_dot_zilla joined #gluster
20:02 JoeJulian ianweller: I'm with you. The cli does not have any method for enabling ssl. 3.5 has a much more configurable ssl on the schedule, but nothing in 3.4. Perhaps file a bug report.
20:02 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:03 ianweller and now i'm just getting SSL connect errors between the two servers. sigh
20:04 ianweller certificate verify failed?
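A verify failure usually means the peers don't trust each other's certificates. GlusterFS's SSL transport looks, by default, for /etc/ssl/glusterfs.pem (the node's cert), /etc/ssl/glusterfs.key (its key) and /etc/ssl/glusterfs.ca (a bundle that has to contain every peer's cert, or the CA that signed them). A minimal self-signed sketch for a test setup, run on each node, with all the resulting .pem files concatenated into every node's glusterfs.ca:

    openssl genrsa -out /etc/ssl/glusterfs.key 2048
    openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj "/CN=$(hostname)" \
            -days 365 -out /etc/ssl/glusterfs.pem
    # gather every node's glusterfs.pem, then on each node:
    cat server1.pem server2.pem > /etc/ssl/glusterfs.ca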
20:04 leem /quit change of venue
20:04 t_dot_zilla anyone know how/why when i run "gluster peer probe gfs02", im getting "peer probe: failed: Error through RPC layer, retry again later"
20:05 t_dot_zilla ps aux | grep glu          ----> /usr/sbin/glusterd -p /var/run/glusterd.pid              on both servers
20:05 t_dot_zilla and the firewall is off
20:06 _pol joined #gluster
20:10 _pol joined #gluster
20:12 t_dot_zilla netstat -tap | grep gluster shows different ports on each server, wtf
20:12 t_dot_zilla they are exact same install!
20:13 t_dot_zilla wait i take that back
20:13 t_dot_zilla wrong versions oy... what happened to backward compatibility!
20:14 morsik backward?!
20:14 morsik do you compare 3.2 to 3.3?
20:22 ianweller left #gluster
20:23 anands joined #gluster
20:23 t_dot_zilla 3.4 to 3.2 don't work well together
20:26 theron joined #gluster
20:28 t_dot_zilla i'm unable to mount the datastores... oy
20:28 semiosis t_dot_zilla: no one ever said 3.4 was backward compatible with 3.2
20:28 t_dot_zilla they should change major release number if backward compatibility braks
20:28 t_dot_zilla breaks
20:29 semiosis you should read docs before upgrading
20:29 johnbot11 joined #gluster
20:30 semiosis see ,,(3.3 upgrade notes) for info on upgrading 3.2 to 3.3+
20:30 glusterbot http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
20:30 t_dot_zilla meh i didn't upgrade just experimenting and surprised they weren't compatible
20:31 semiosis there's also ,,(3.4 upgrade notes)
20:31 glusterbot http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
20:31 semiosis why would you want to interop diff versions?  you should use the ,,(latest) on all clients & servers
20:31 glusterbot The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
20:35 SpeeR one of my bricks is still seeing a high usage compared to the others... if this brick fills up, will gluster know to put the next backups on a different brick?
20:36 t_dot_zilla what ports are required to be openon FW?
20:38 semiosis ,,(ports)
20:38 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
20:40 t_dot_zilla 49152& up??
20:40 t_dot_zilla what is the upper limit?
20:41 andreask joined #gluster
20:41 semiosis not sure
20:41 t_dot_zilla 49152:59152 ?
20:42 semiosis the limit is how many bricks you add to the server, not counting subtractions for bricks deleted
20:42 semiosis whats that limit?  i'm not sure
20:43 semiosis we've theorized based on fd limit, tcp socket limit, etc, but idk if anyone's actually tested the limit
20:43 t_dot_zilla confused
20:43 t_dot_zilla what is a good limit, 10,000 ?
20:43 t_dot_zilla just to be safe
20:49 semiosis i have 100
20:49 semiosis depends how paranoid you are
20:49 semiosis you could, of course, see what ports are actually in use, and only open those
20:49 semiosis i am slightly more lazy than paranoid so I just opened 100 ports
20:50 semiosis and have probably eaten about 1/3 of the way into that so far
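Expressed as iptables rules for a 3.4 server (the 100-port brick range is just an example sizing, matching what semiosis describes above):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd (24008 only if rdma is used)
    iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT   # bricks: one port each, 100 ports here
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS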
20:50 t_dot_zilla is there a gluster wiki?
20:51 neofob joined #gluster
20:56 semiosis @wiki
20:57 semiosis @learn wiki as http://www.gluster.org/community/documentation/index.php/Main_Page
20:57 glusterbot semiosis: The operation succeeded.
20:57 semiosis @wiki
20:57 glusterbot semiosis: http://www.gluster.org/community/documentation/index.php/Main_Page
20:57 semiosis t_dot_zilla: ^
21:00 t_dot_zilla #wikitywhack
21:00 t_dot_zilla semiosis: you maintain the ubuntu repo?
21:03 semiosis ubuntu + debian
21:03 semiosis direct all gripes at me :)
21:04 sroy_ joined #gluster
21:05 t_dot_zilla no gribe :) good job!
21:05 t_dot_zilla very easy to install 3.4 on both ubuntu and debian
21:06 t_dot_zilla i've heard 3.4 fixes a lot of stuff
21:09 theron joined #gluster
21:10 andreask joined #gluster
21:15 rwheeler joined #gluster
21:15 davidbierce semiosis:  I shall gripe at you that I can't just apt-get install gluster-swift and have object storage magic with no configuration :)
21:16 semiosis davidbierce: gripe noted
21:17 davidbierce I was expecting slap to reach me across the internet
21:21 mkzero we had a strange problem with our gluster live deployment during the last week. we have a folder that we want to remove which is clearly empty but on rmdir it returns a ENOTEMPTY error. any idea how this could happen?
21:25 mkzero okay, i just noticed that there are some ------T links left on one of the bricks. I'm curious how this could happen. can't really reproduce it right now..
21:25 tqrst mkzero: I can't remember the specifics (it's been a while since that last happened and I don't have my notes handy), but run find path/to/folder -type f on every brick to make sure none of them have stray files in there.
21:25 semiosis davidbierce: lol
21:26 mkzero tqrst: thank you. just found some gluster link files in that folder
21:26 tqrst mkzero: anything in that brick's log?
21:27 mkzero tqrst: can't find anything relevant :/
21:27 mattapp__ joined #gluster
21:29 JoeJulian Check the dates on those files. I bet they're old.
21:29 semiosis ,,(link file)
21:29 glusterbot semiosis: Error: No factoid matches that key.
21:29 semiosis hrm
21:29 JoeJulian @sticky pointer
21:29 JoeJulian nope
21:29 tqrst ,,(sticky)
21:30 glusterbot tqrst: Error: No factoid matches that key.
21:30 JoeJulian Guess we never wrote one...
21:30 semiosis often caused by renaming a file
21:30 johnbot11 joined #gluster
21:32 tqrst semiosis: in other news, I finally managed to rebalance my whole volume without segfaults or OOMs
21:32 johnbot1_ joined #gluster
21:32 semiosis \o/
21:32 semiosis was there a trick to it, or did you just keep trying & get lucky?
21:33 tqrst I had to get rid of a few million files and a few thousand folders to get it down to something reasonable
21:33 mkzero oh.. wait. could those pointers be leftovers from a full brick that "swaps" data out to other nodes? not sure how that works, but that brick hit the full-ratio about a week ago
21:34 JoeJulian mkzero: yep, that could potentially do it, I bet.
21:35 mkzero JoeJulian: btw is there any documentation on how gluster works exactly when one or more bricks are full? at the moment that seems just like some fairy magic to me and i really would like to understand that in full
21:36 JoeJulian Yes, but it's documented in C.
21:36 JoeJulian And I
21:36 JoeJulian ...
21:37 JoeJulian And I'm fairly sure it does, indeed, involve fairy magic.... Yep, there in the source, fairy_magic(file);
21:38 johnbot11 joined #gluster
21:39 mkzero Are you serious? :D
21:39 JoeJulian Tough crowd today...
21:42 tqrst the trick is to not run out of space
21:45 semiosis i got the 85% warning alert from nagios saturday, today expanding my bricks
21:46 mkzero tqrst: yeah, i'm running on a tight budget.. i already expanded those smaller bricks but i just don't have enough HW atm.
21:47 * m0zes is at 70%. will have to increase again soon. 100TB bricks are fun to expand.
21:52 mkzero JoeJulian: trust me, i've seen enough 'funny' code with function names like that. for a brief moment that sounded pretty believable. now.. any serious comment on what to search for in the code? i'm not familiar w/ glusters source.
21:57 badone joined #gluster
22:09 luis_silva joined #gluster
22:14 psyl0n joined #gluster
22:15 diegows joined #gluster
22:17 JoeJulian mkzero: in xlators/cluster/dht/src/dht-common.c I would probably start looking around references to dht_free_disk_available_subvol
22:17 psyl0n_ joined #gluster
22:19 JoeJulian mkzero: cross reference that with dht_free_disk_available_subvol
22:19 JoeJulian gah
22:19 JoeJulian s/dht_free_disk.*/dht_linkfile_create/
22:19 glusterbot What JoeJulian meant to say was: mkzero: cross reference that with dht_linkfile_create
22:23 theron joined #gluster
22:23 JoeJulian Actually, that's pretty straightforward. if !dht_is_subvol_filled, make the file where it belongs. else create the file on the least full subvol and create the linkfile.
22:29 digimer joined #gluster
22:37 tryggvil joined #gluster
22:40 MacWinner joined #gluster
22:43 theron joined #gluster
22:46 diegows joined #gluster
22:59 _pol_ joined #gluster
22:59 _pol__ joined #gluster
23:00 mattapp__ joined #gluster
23:05 gdubreui joined #gluster
23:08 mattapp__ joined #gluster
23:12 _pol joined #gluster
23:34 elyograg joined #gluster
23:38 _pol joined #gluster
23:38 SpeeR JoeJulian does this take the size of the file that is trying to be written into account? If I'm trying to write 7TB and only have 6TB left:  (!dht_is_subvol_filled (this, subvol)
23:49 semiosis i wouldn't count on it
23:50 SpeeR heh, I'm trying to remember the little bit of C that I've tried to forget,
23:51 SpeeR but I don't see anything grabbing the file
23:52 _pol joined #gluster
23:59 theron joined #gluster
