
IRC log for #gluster, 2014-07-16


All times shown according to UTC.

Time Nick Message
00:09 qdk joined #gluster
00:14 jobewan joined #gluster
00:19 B21956 joined #gluster
00:27 allgood-circ joined #gluster
00:27 allgood-circ left #gluster
00:27 allgood joined #gluster
00:28 allgood hi folks... i am reading the gluster docs right now and i have a doubt about one thing
00:28 allgood can I have a distributed-replicated server with 3 nodes?
00:28 allgood in a way that any file is stored on any two servers
00:29 JoeJulian allgood: Sure. You have to have an even number of bricks, but you can have multiple bricks per server.
00:29 JoeJulian As far as nodes go, gluster places no restriction on how many network endpoints you have (a "node" is a network endpoint).
00:29 allgood JoeJulian: well... it isn't what I expected... but it will work for me
00:30 allgood JoeJulian: I was expecting to have three bricks with replica 2... in a way that if one brick fails, it heals itself to a simply replicated one
00:31 allgood what happens if one brick fails on a replica 2 cluster with 4 bricks?
00:31 aub joined #gluster
00:32 allgood will it heal only after I replace the failed one?
00:33 JoeJulian correct
00:33 JoeJulian @mount order
00:33 glusterbot JoeJulian: I do not know about 'mount order', but I do know about these similar topics: 'mount server'
00:33 JoeJulian @brick order
00:33 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
00:36 allgood so, to have 3 servers with one distributed & replicated volume, I will need to create it something like this: server1/brick1 server2/brick2 server3/brick3 server1/brick4 server2/brick5 server3/brick6
00:36 allgood is it correct?
00:38 JoeJulian The naming is a bit confusing. I would prefer server1:brick1 server2:brick1 server2:brick2 server3:brick1 server3:brick2 server1:brick2
00:38 allgood think I could have stopped at the 4th brick.
00:38 allgood well... but the idea is correct then
00:39 JoeJulian yes
00:39 allgood great, it will work well for me... will use your naming scheme
00:39 JoeJulian A more visual representation
00:39 JoeJulian @lucky expanding a volume by one brick
00:39 glusterbot JoeJulian: http://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
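
The chained ordering JoeJulian suggests above would look roughly like the following volume create command (hostnames and brick paths are placeholders; with replica 2, consecutive bricks form a pair, so each pair spans two different servers):

    # hypothetical 3-server distributed-replicated volume, replica 2
    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server2:/data/brick2 server3:/data/brick1 \
        server3:/data/brick2 server1:/data/brick2
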
00:40 JoeJulian glusterbot: thanks
00:40 glusterbot JoeJulian: you're welcome
00:40 JoeJulian glusterbot++
00:40 glusterbot JoeJulian: glusterbot's karma is now 4
00:40 allgood gusterbot++
00:40 glusterbot allgood: gusterbot's karma is now 1
00:40 JoeJulian heh
00:40 allgood glusterbot++
00:40 glusterbot allgood: glusterbot's karma is now 5
00:40 allgood typo
00:41 allgood I will run guest xen images hosted on glusterfs... is there any recommendation that I must follow? (or, is it a bad idea?)
00:42 JoeJulian @virtual hosting
00:42 JoeJulian damn... should add that one.
00:44 allgood i've found some info saying that 3.4 has better support for xen images... think it is the heal protocol
00:45 JoeJulian Could be. I prefer qemu-kvm.
00:45 JoeJulian @learn vm image tuning as https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html-single/Configuring_Red_Hat_OpenStack_with_Red_Hat_Storage/index.html#idm73980832
00:45 glusterbot JoeJulian: The operation succeeded.
00:45 allgood i like paravirtualization
00:46 JoeJulian me too
00:47 JoeJulian Plus, when using kvm I can use libgfapi and avoid fuse.
00:47 allgood don't know about it..
00:47 gmcwhistler joined #gluster
00:48 allgood xen does use qemu when running HVM (not PV) guests.
00:48 allgood is there a lot of overhead on using fuse?
00:49 JoeJulian context switches
00:49 JoeJulian They can be expensive
00:49 JoeJulian Depends on your use case of course.
00:51 theron joined #gluster
00:51 allgood found this: http://goo.gl/F8LDp6
00:51 glusterbot Title: Xen project Mailing List (at goo.gl)
00:58 theron_ joined #gluster
00:58 vpshastry joined #gluster
01:02 allgood i am trying to figure out if I can use only the base system as a guest image on glusterfs and make the guest connect to gluster and mount its data directly.
01:03 JoeJulian That's what I have done for years.
01:04 JoeJulian I had separate volumes, one for vm images, another for mysql, another for home directories, one for web stuff... etc.
01:04 allgood as it uses fuse, I can't mount any volume at boot time; need to check if fuse is available in debian's initrd
01:06 allgood looks like it is best to do it this way then... if I do not do this, one big image would have small changes and make the replication/heal more complex
01:07 JoeJulian See! That's what I keep telling people.
01:11 allgood is there any recommendation to avoid mounting the root filesystem of a vm through NFS? I think that this way I do not need an image at all
01:13 cjanbanan joined #gluster
01:16 allgood JoeJulian: will go now... thank you for your help!
01:16 JoeJulian I've seen people do that, but most go back to images
01:16 allgood JoeJulian++
01:16 glusterbot allgood: JoeJulian's karma is now 6
01:17 JoeJulian :)
01:18 allgood left #gluster
01:20 vpshastry joined #gluster
01:24 gildub joined #gluster
01:49 ilbot3 joined #gluster
01:49 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:06 gmcwhist_ joined #gluster
02:06 RameshN joined #gluster
02:12 vpshastry joined #gluster
02:17 Edddgy joined #gluster
02:24 bala joined #gluster
02:31 harish joined #gluster
02:42 glusterbot New news from newglusterbugs: [Bug 1065632] glusterd: glusterd peer status failed with the connection failed error evenif glusterd is running <https://bugzilla.redhat.com/show_bug.cgi?id=1065632>
02:44 haomaiwa_ joined #gluster
03:00 haomaiw__ joined #gluster
03:01 vpshastry joined #gluster
03:02 harish joined #gluster
03:13 bharata-rao joined #gluster
03:14 cjanbanan joined #gluster
03:14 kshlm joined #gluster
03:17 Edddgy joined #gluster
03:21 vpshastry joined #gluster
03:42 hchiramm_ joined #gluster
03:47 nbalachandran joined #gluster
03:47 shubhendu|lunch joined #gluster
03:49 RameshN joined #gluster
03:50 itisravi joined #gluster
03:55 nbalachandran joined #gluster
04:05 dusmant joined #gluster
04:11 nishanth joined #gluster
04:13 kumar joined #gluster
04:18 Edddgy joined #gluster
04:24 gmcwhistler joined #gluster
04:27 RioS2 joined #gluster
04:27 RioS2 joined #gluster
04:28 sjm left #gluster
04:35 raghu joined #gluster
04:35 atinmu joined #gluster
04:35 deepakcs joined #gluster
04:39 anoopcs joined #gluster
04:46 ppai joined #gluster
04:47 Rafi_kc joined #gluster
04:48 bala joined #gluster
04:52 ramteid joined #gluster
04:53 hchiramm_ joined #gluster
04:55 spandit joined #gluster
04:59 ndarshan joined #gluster
05:01 kdhananjay joined #gluster
05:02 coredump joined #gluster
05:07 prasanth_ joined #gluster
05:13 cjanbanan joined #gluster
05:18 psharma joined #gluster
05:19 Edddgy joined #gluster
05:22 sahina joined #gluster
05:28 kanagaraj joined #gluster
05:33 davinder16 joined #gluster
05:38 dtrainor joined #gluster
05:38 jiffin joined #gluster
05:41 karnan joined #gluster
05:42 sman joined #gluster
05:43 sauce joined #gluster
05:44 jiku joined #gluster
05:50 7F1AARQRT joined #gluster
05:50 17SAAGN0S joined #gluster
05:58 XpineX joined #gluster
06:00 dtrainor joined #gluster
06:03 rastar joined #gluster
06:10 rgustafs joined #gluster
06:13 cjanbanan joined #gluster
06:19 Edddgy joined #gluster
06:25 bala joined #gluster
06:25 mbukatov joined #gluster
06:33 ghenry joined #gluster
06:33 ghenry joined #gluster
06:38 RameshN joined #gluster
06:38 gEEbusT joined #gluster
06:43 gEEbusT Hi all, I am having issues with Samba VFS + Gluster - 2 nodes with a brick on each (replica 2), if I shut down the second node I can still use the first if I already have a connection to the share but cannot establish any new connections. Is anybody able to help with this?
06:43 cjanbanan joined #gluster
06:50 saurabh joined #gluster
06:57 ekuric joined #gluster
06:58 ctria joined #gluster
07:06 sahina joined #gluster
07:07 eseyman joined #gluster
07:14 aravindavk joined #gluster
07:16 hagarth joined #gluster
07:16 21WAAP3VO joined #gluster
07:20 Edddgy joined #gluster
07:21 keytab joined #gluster
07:30 giannello joined #gluster
07:40 cjanbanan joined #gluster
07:40 liquidat joined #gluster
07:41 dusmant joined #gluster
07:41 andreask joined #gluster
07:44 lalatenduM joined #gluster
07:50 fsimonce joined #gluster
07:54 Philambdo joined #gluster
07:54 necrogami joined #gluster
08:14 cjanbanan joined #gluster
08:21 Edddgy joined #gluster
08:22 Thilam [17:41] <ndevos> Thilam: thanks for the bug! I just found bug 1118591, it sounds quite similar
08:22 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1118591 urgent, urgent, ---, vshastry, POST , core: all brick processes crash when quota is enabled
08:22 Thilam hello
08:22 glusterbot Thilam: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:22 Thilam I had a look at your answer
08:24 aravindavk joined #gluster
08:25 Thilam but in my case only one brick crashes at a time
08:27 ccha2 is there a changelog about 3.4.3 to 3.4.4 ?
08:31 RameshN joined #gluster
08:41 Slashman joined #gluster
09:04 necrogami joined #gluster
09:04 Pupeno joined #gluster
09:06 Philambdo joined #gluster
09:11 dtrainor joined #gluster
09:13 cjanbanan joined #gluster
09:22 Edddgy joined #gluster
09:23 DV__ joined #gluster
09:24 masterzen joined #gluster
09:29 sahina joined #gluster
09:29 chirino joined #gluster
09:41 haomaiwa_ joined #gluster
09:44 cjanbanan joined #gluster
09:47 shubhendu joined #gluster
09:47 dusmant joined #gluster
09:47 necrogami joined #gluster
09:53 ira joined #gluster
10:02 ppai joined #gluster
10:05 jmarley joined #gluster
10:05 jmarley joined #gluster
10:20 andreask joined #gluster
10:20 rastar joined #gluster
10:22 Edddgy joined #gluster
10:23 glusterbot New news from resolvedglusterbugs: [Bug 1027174] Excessive logging of warning message "remote operation failed: No data available" in samba-vfs logfile <https://bugzilla.redhat.com/show_bug.cgi?id=1027174>
10:28 qdk joined #gluster
10:31 raghu left #gluster
10:33 Philambdo joined #gluster
10:44 glusterbot New news from newglusterbugs: [Bug 928781] hangs when mount a volume at own brick <https://bugzilla.redhat.com/show_bug.cgi?id=928781> || [Bug 1120136] Excessive logging of warning message "remote operation failed: No data available" in samba-vfs logfile <https://bugzilla.redhat.com/show_bug.cgi?id=1120136>
10:45 bene2 joined #gluster
10:45 MattAtL joined #gluster
10:47 giannello_ joined #gluster
10:54 Gugge joined #gluster
10:54 aravindavk joined #gluster
10:55 cjanbanan joined #gluster
10:58 deepakcs joined #gluster
11:07 hagarth joined #gluster
11:11 Pupeno_ joined #gluster
11:14 glusterbot New news from newglusterbugs: [Bug 974886] timestamps of brick1 and brick2 is not the same. <https://bugzilla.redhat.com/show_bug.cgi?id=974886> || [Bug 990220] Group permission with high GID Number (200090480) is not being honored by Gluster <https://bugzilla.redhat.com/show_bug.cgi?id=990220> || [Bug 1095179] Gluster volume inaccessible on all bricks after a glusterfsd segfault on one brick <https://bugzilla.redhat.com/show_bug.cgi?id=1095179>
11:15 gildub joined #gluster
11:16 prasanth_ joined #gluster
11:22 bala joined #gluster
11:23 Edddgy joined #gluster
11:31 rgustafs joined #gluster
11:33 nshaikh joined #gluster
11:34 harish joined #gluster
11:35 hybrid512 Hi
11:35 glusterbot hybrid512: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:35 hybrid512 is there a way to change mount options of a gluster mount point live ?
11:36 edward1 joined #gluster
11:36 hybrid512 let me explain: I mounted my gluster volume with --direct-io-mode=disable and I would like to remove that option, but my mount is busy and I can't unmount/mount it. is there a way to change that option live?
11:38 diegows joined #gluster
11:46 anoopcs joined #gluster
12:01 Slashman_ joined #gluster
12:07 ira joined #gluster
12:09 Humble joined #gluster
12:10 SpComb rm: cannot remove ‘.git/refs/tags’: Transport endpoint is not connected
12:10 SpComb rmdir is failing?
12:14 SpComb umount and re-mount the client glusterfs and I can rmdir again
12:15 itisravi_ joined #gluster
12:16 baoboa joined #gluster
12:18 ndevos SpComb: 'Transport endpoint is not connected' is often related to the glusterfs-fuse process not responding anymore, it may have crashed or hit another issue like a network delay
12:18 Rafi_kc joined #gluster
12:18 ndevos hybrid512: no, I do not think that is possible, sorry
12:20 Rafi_kc joined #gluster
12:23 Rafi_kc joined #gluster
12:23 Rafi_kc joined #gluster
12:24 Edddgy joined #gluster
12:29 hchiramm joined #gluster
12:30 social purpleidea: ping, I'm looking on puppet-gluster and I see that you do some lvm stuff. Is there any particular reason not to use puppetlabs-lvm there?
12:31 SpComb I'm unable to find any explanation for what glusterfs --direct-io-mode=disable does; does the default effectively set O_DIRECT on all opened glusterfs files, i.e. disable the use of the kernel vfs caches?
12:31 SpComb implying that --direct-io-mode=disable does more caching on the client... does that mean more inconsistencies between different clients?
12:33 social man glusterfs - --direct-io-mode=BOOL Enable/Disable the direct-I/O mode in fuse module (the default is enable).
12:34 social man fuse then answers the question
12:34 social SpComb: ^^
12:35 SpComb does it affect metadata, i.e. dentry caches?
12:35 SpComb I guess not, "This option disables the use of page cache (file content cache) in the kernel for this filesystem."
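
For reference, direct-io-mode is a client-side mount option, so per ndevos above it can only be changed with a remount. A sketch of the two usual forms, with placeholder server, volume and mountpoint names:

    # mount with direct I/O disabled (kernel page cache used for file contents)
    mount -t glusterfs -o direct-io-mode=disable server1:/myvol /mnt/myvol
    # equivalent fstab entry
    server1:/myvol /mnt/myvol glusterfs defaults,_netdev,direct-io-mode=disable 0 0
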
12:39 ppai left #gluster
12:43 Slashman joined #gluster
12:44 sjm joined #gluster
12:45 cjanbanan joined #gluster
12:46 kanagaraj joined #gluster
12:49 julim joined #gluster
12:50 theron joined #gluster
12:58 SpComb I'm running `sudo gluster volume create admin replica 3 foohost{2..7}:/srv/nas/admin/brick0` and it's giving me just "volume create: admin: failed" without any additional output - what gives?
13:00 SpComb can't find any related logs
13:02 plarsen joined #gluster
13:08 edwardm61 joined #gluster
13:08 SpComb hmm... it creates an empty .../brick0 on 4/6 servers, but doesn't do it on foohost5 and foohost6
13:09 jobewan joined #gluster
13:10 sahina joined #gluster
13:10 japuzzo joined #gluster
13:11 shubhendu joined #gluster
13:13 chirino joined #gluster
13:14 SpComb restarting glusterd let foohost6 create the vol, but foohost5 still failed
13:14 diegows joined #gluster
13:14 SpComb starting `glusterd --debug` on foohost5 showed that it considered foohost3 as "not connected"
13:15 SpComb hmm... must be some kind of network issue..
13:19 SpComb yes, now that `gluster peer status` shows that the cluster is fully connected the `volume create` succeeds... weirdness
13:19 SpComb but not giving any kind of error message output at all is annoying :)
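
Roughly the checks SpComb ended up doing, for anyone else who hits a silent "volume create: ... failed" (the glusterd log path below is a common default but may differ by distro or version):

    # confirm every peer shows as connected before creating the volume
    gluster peer status
    # if the create still fails without output, check the glusterd log on each node
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
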
13:19 ndarshan joined #gluster
13:20 nishanth joined #gluster
13:20 dusmant joined #gluster
13:21 clutchk joined #gluster
13:22 clutchk1 joined #gluster
13:25 Edddgy joined #gluster
13:34 cjanbanan joined #gluster
13:43 sputnik1_ joined #gluster
13:43 diegows joined #gluster
13:48 cjanbanan joined #gluster
13:49 dberry joined #gluster
13:49 dberry joined #gluster
13:50 dberry I added two more servers to a replicated volume, do I need to run volume rebalance?
13:53 jmarley joined #gluster
13:53 jmarley joined #gluster
13:54 tdasilva joined #gluster
13:54 mortuar joined #gluster
13:57 aravindavk joined #gluster
13:59 andreask joined #gluster
13:59 davinder16 joined #gluster
14:06 sas_ joined #gluster
14:10 wushudoin joined #gluster
14:21 andreask joined #gluster
14:21 doo joined #gluster
14:22 kshlm joined #gluster
14:23 clutchk joined #gluster
14:25 rwheeler joined #gluster
14:25 Edddgy joined #gluster
14:30 mortuar joined #gluster
14:31 doo_ joined #gluster
14:34 gmcwhistler joined #gluster
14:37 nbalachandran joined #gluster
14:38 anoopcs joined #gluster
14:49 elico left #gluster
14:49 tqrst joined #gluster
15:02 Eco_ joined #gluster
15:04 jdarcy joined #gluster
15:04 skippy joined #gluster
15:05 skippy Hi!  I'm curious about design recommendations, and especially when to carve out multiple bricks for Gluster volumes; as opposed to bricks-on-LVM and extending the logical volumes.
15:06 skippy Anyone have any links or resources handy?  I've not had much search success.
15:10 simulx2 joined #gluster
15:15 _Bryan_ joined #gluster
15:15 jcsp joined #gluster
15:16 Georgyo left #gluster
15:16 skulker joined #gluster
15:17 semiosis skippy: say more about your intended use case. there's no one-size-fits-all recommendation.
15:19 skulker trying to mount glusterfs. error: failed to fetch volume  (key:testvol) ver: 3.5.1.  I've matched client to server versions. any ideas?
15:19 tqrst Any potential hiccups going from 3.4.2 to 3.5.1 for a simple distributed-replicate volume? Seems like a fairly straightforward update judging by the release notes.
15:19 itisravi_ joined #gluster
15:19 skulker fstab:  gluster1:testvol /mnt/glusterfs glusterfs defaults,_netdev,backupvolfile-server=gluster2 0 0
15:20 skippy semiosis: are there published recommendations anywhere that discuss the recommended Gluster configs for different types of use?
15:21 semiosis skulker: please ,,(pasteinfo) and also include the complete mount command you are using.
15:21 glusterbot skulker: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
15:21 semiosis skippy: not yet
15:22 semiosis skulker: oh never mind the mount command i see your fstab line
15:24 skulker thanks semiosis I also added option rpc-auth-allow-insecure on to glusterd.vol  and volume set testvol server.allow-insecure on based on similar posts
15:24 semiosis skulker: still waiting on that ,,(pasteinfo)
15:24 glusterbot skulker: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
15:25 skippy semiosis: I'm looking to replace a NAS NFS appliance with a Gluster distributed replicate setup.  Mostly curious about when and why to add more bricks (on the same backend storage) versus just extending the existing bricks.
15:25 dblack joined #gluster
15:26 semiosis skippy: personally, i try to avoid adding bricks, because rebalancing the data is expensive.  so what I consider is performance vs. capacity
15:26 semiosis skippy: you can have many bricks per server, say 10 bricks replicated between 2 servers
15:26 semiosis to start
15:26 Edddgy joined #gluster
15:26 semiosis maybe they're 100GB each, giving you 1TB total usable space
15:27 semiosis if you need more capacity, you grow the bricks with LVM or whatever
15:27 semiosis if you need more performance, you move the bricks onto more servers, say move 5 bricks (from each replica) to a new server
15:28 semiosis now you have twice the performance
15:28 semiosis that's how I do it
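
A sketch of the layout semiosis describes, with assumed hostnames, brick paths and LVM names; the brick list alternates servers so each replica pair lands on two machines, and capacity is later grown by extending the logical volumes behind the bricks rather than adding bricks:

    # ten replica pairs: ten ~100GB bricks on each of two servers (~1TB usable)
    bricks=""
    for i in $(seq 1 10); do
        bricks="$bricks server1:/bricks/b$i server2:/bricks/b$i"
    done
    gluster volume create bigvol replica 2 $bricks

    # later, grow a brick by extending its backing LV and filesystem (XFS shown)
    lvextend -L +100G /dev/vg0/brick1
    xfs_growfs /bricks/b1
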
15:28 ekuric left #gluster
15:30 tom[] joined #gluster
15:33 skippy thanks, semiosis
15:34 stickyboy semiosis: Nice breakdown. What about DHT overhead with more bricks?
15:34 stickyboy Particularly with interactive performance (home dirs on 10GbE)?
15:36 semiosis stickyboy: that depends on the details of your application/use case
15:37 semiosis actually, i'm not even sure what you mean by DHT overhead
15:37 stickyboy semiosis: ls is slow. :)
15:37 stickyboy On 10GbE
15:38 semiosis ah well, that's just a fact of life
15:38 semiosis does having more bricks make it slower?
15:38 semiosis maybe a little bit
15:39 stickyboy semiosis: I guess as long as the replica count is the same it shouldn't.
15:40 stickyboy semiosis: But I'm seriously considering moving to Infiniband.  Kinda hoping to bounce ideas off people about hardware etc.
15:40 stickyboy JoeJulian said for interactivity Infiniband is a must.
15:42 jiffin joined #gluster
15:43 eryc joined #gluster
15:43 andreask joined #gluster
15:43 andreask joined #gluster
15:43 eryc joined #gluster
15:43 vimal joined #gluster
15:43 stickyboy Last week I identified some issues with my XFS stripe unit, but that only improved streaming I/O.
15:44 cjanbanan joined #gluster
15:45 anoopcs joined #gluster
15:45 LebedevRI joined #gluster
15:47 anoopcs joined #gluster
15:55 coredump joined #gluster
15:56 StarBeast joined #gluster
15:58 andreask joined #gluster
15:58 andreask joined #gluster
16:02 Edddgy joined #gluster
16:04 dtrainor joined #gluster
16:07 Rafi_kc joined #gluster
16:07 Mo__ joined #gluster
16:08 Pupeno joined #gluster
16:18 hagarth joined #gluster
16:27 sputnik1_ joined #gluster
16:27 sputnik1_ joined #gluster
16:28 sputnik1_ joined #gluster
16:28 StarBeast joined #gluster
16:29 bala joined #gluster
16:35 rjoseph joined #gluster
16:56 Peter3 joined #gluster
16:56 Peter3 any tunings to speed up rpc lookup on gluster?
16:57 daMaestro joined #gluster
17:01 JoeJulian Don't use replication?
17:01 bala joined #gluster
17:03 Peter3 but i have to….
17:04 JoeJulian You /can/ turn off the self-heal checks at the client. Then you just get whatever you get.
17:04 Peter3 i have to do a replica 2 but also need rsync of lots of small files to work well
17:05 Peter3 i m using NFS
17:05 JoeJulian Are you using --inplace with rsync?
17:05 JoeJulian Oh, well then you're SOL.
17:05 troj joined #gluster
17:06 Peter3 :(
17:06 Peter3 i can look into inplace
17:06 Peter3 what does it do?
17:08 rotbeard joined #gluster
17:08 JoeJulian rsync copies a file to a temp filename then renames it. That means the filename hashes out to one dht subvol, then is renamed. That may (probably does) cause the filename to hash to a different brick. That'll then create a bunch of dht-linkto entries causing a lookup to go to the wrong brick first, then to the right brick. That can be mitigated with a rebalance, but to avoid that in the first place, use --inplace to prevent rsync from creating
17:08 JoeJulian tempfile names.
17:09 JoeJulian Wow... I think my espresso kicked in.
17:09 Peter3 haha nice! thx!
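
A hedged example of the rsync invocation being suggested, with placeholder paths; --inplace updates the destination file under its final name, so DHT hashes the real filename instead of a temp name that later gets renamed:

    # write changes directly into the destination files (no temp file + rename)
    rsync -av --inplace /source/dir/ /mnt/glustervol/dir/
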
17:12 _dist joined #gluster
17:21 anoopcs joined #gluster
17:30 StarBeast joined #gluster
17:41 purpleidea social: i have never seen puppetlabs-lvm
17:46 clutchk joined #gluster
17:50 jiffin1 joined #gluster
17:50 anoopcs1 joined #gluster
17:54 Matthaeus joined #gluster
17:54 JoeJulian purpleidea: Have you had any issue with VMs under vagrant not being able to do dns lookups with libvirt?
17:57 Rafi_kc joined #gluster
17:58 purpleidea JoeJulian: assuming you mean vagrant-libvirt, then i do not know because i use /etc/hosts for DNS. i currently did this manually by hacking /etc/hosts, but i'm soon going to switch to a vagrant module to do this.
17:58 JoeJulian Do your VMs not download anything? gluster packages maybe?
18:00 tdasilva_ joined #gluster
18:02 giannello joined #gluster
18:03 systemonkey joined #gluster
18:14 cjanbanan joined #gluster
18:15 purpleidea JoeJulian: oh that sort of DNS. no it works fine. what does your /etc/resolv.conf say? are you using libvirt, and what type of networking on the HOST? nat? bridged?
18:17 anoopcs joined #gluster
18:20 JoeJulian purpleidea: resolv points to the local dnsmasq. networking is nat. I can see the masqueraded DNS requests on em1 on the fedora host, but no replies. Just wondering if it was something you knew about.
18:21 purpleidea JoeJulian: unfortunately/fortunately i have not had this issue. if i can log into your box, i'm happy to hack on it for a few minutes though
18:23 semiosis JoeJulian: iptables
18:23 semiosis ;)
18:24 JoeJulian hehe, I checked that and selinux already. :D
18:26 purpleidea JoeJulian: my guess is your libvirt networking isn't setup correctly...
18:26 purpleidea JoeJulian: can you ping 8.8.8.8 from inside the vm ?
18:26 JoeJulian yep
18:26 JoeJulian Just can't get dns responses.
18:26 purpleidea JoeJulian: and what does /etc/resolv.conf say
18:26 purpleidea (inside the vm?)
18:26 JoeJulian I'm not a newb... ;)
18:28 chirino joined #gluster
18:28 purpleidea JoeJulian: i didn't think so. i just don't know what else could be wrong, sorry
18:28 purpleidea JoeJulian: check firewalld...
18:28 JoeJulian Hehe, that's ok. I can figure it out. Just figured it would save time if you'd already encountered it.
18:30 purpleidea sorry! let me know what the issue is. if it's vagrant-libvirt, or libvirt i could patch or get it patched
18:31 JoeJulian I've already got a couple of things I'll need to submit patches for.
18:31 StarBeast joined #gluster
18:34 purpleidea JoeJulian: sweet! point me to early ideas or patches
18:34 JoeJulian They're just one-liners. nothing exciting.
18:35 Matthaeus1 joined #gluster
18:36 social joined #gluster
18:39 dtrainor joined #gluster
18:40 dtrainor joined #gluster
18:44 pdrakeweb joined #gluster
18:44 MattAtL left #gluster
18:52 ekuric joined #gluster
19:02 tdasilva joined #gluster
19:06 ndk joined #gluster
19:10 theron joined #gluster
19:14 cjanbanan joined #gluster
19:19 Edddgy joined #gluster
19:31 dberry If I add 2 more servers to a 2-server replicated volume, will it auto heal and copy the files to the new servers?
19:33 JoeJulian dberry: There's no healing nor copying necessary. You add two new servers and rebalance.
19:35 LebedevRI joined #gluster
19:35 dberry so do I have to run the rebalance command?
19:36 _dist dberry: if you're adding more bricks to a replicate volume by increasing the replica count, it'll "just work"
19:36 JoeJulian But who would do THAT?
19:36 JoeJulian :P
19:36 JoeJulian Are you trying to make a 99.999999999% uptime system?
19:37 _dist ^ 99.99999999999 is still better
19:37 dberry I am attempting to move gluster from 2 servers to bigger, badder servers without downtime
19:37 JoeJulian replica > 3 is ridiculous.
19:38 JoeJulian UNLESS you're someone like netflix who needs 1000 copies of the same file so it can be streamed from more than 3 servers.
19:38 _dist we currently use 3, 2 on-site and 1 off-site I have to agree that it creates a ton of potential performance issues
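
For dberry's migration, one approach that follows from what _dist describes (a sketch with placeholder volume and brick names, not a procedure confirmed in the channel): temporarily raise the replica count onto the new servers, let self-heal copy the data, then drop the old bricks:

    # add the new servers' bricks, raising the replica count to 4
    gluster volume add-brick myvol replica 4 newserver1:/bricks/b1 newserver2:/bricks/b1
    # wait until self-heal has populated the new bricks
    gluster volume heal myvol info
    # then remove the old bricks, dropping back to replica 2
    gluster volume remove-brick myvol replica 2 oldserver1:/bricks/b1 oldserver2:/bricks/b1 force
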
19:39 lmickh joined #gluster
19:39 _dist dberry: btw in our file share example, I actually made a mistake that was very annoying. A fuse client didn't have a dns entry for the new brick, and it caused annoying issues.
19:40 _dist so I know it seems obvious, but before adding the brick make sure all your clients will have access to it as well
19:40 dberry the wonders of chef - everyone gets the same hosts file/dns
19:40 JoeJulian That's what just took my volume expansion an extra week. I couldn't get the network guys to get the hostnames updated.
19:41 JoeJulian And chef sucks. I can say that unequivocally now that I'm using it in production.
19:41 StarBeast joined #gluster
19:42 JoeJulian I should probably write that up in a blog post, but I'm afraid it would end up an entire book.
19:42 skippy puppet++
19:42 glusterbot skippy: puppet's karma is now 1
19:42 * skippy ducks
19:42 JoeJulian puppet++
19:42 glusterbot JoeJulian: puppet's karma is now 2
19:43 JoeJulian I'm starting to learn salt because we're switching to that from chef. It's not bad so far.
19:43 JoeJulian Parts of it I like better than puppet.
19:43 skippy jump on Ansible, too!  Learn 'em all!
19:43 JoeJulian lol
19:43 JoeJulian CF Engine
19:43 skippy Cfengine2
19:44 skippy that 2 is important
19:44 dberry yeah, too many choices, I was fine w/ssh and bash but the new hotness came along
19:44 skippy ssh+bash == Ansible, basically.
19:44 skippy Puppet takes some learning, but I can't imagine sysadmin work without it now.
19:49 SpComb ansible is excellent for running puppet :)
20:01 sjm left #gluster
20:02 semiosis JoeJulian: netflix uses front-end caching through CDNs for the streaming
20:05 JoeJulian I know that, I'm just making a point about the futility of having excessive replicas.
20:05 semiosis oh
20:08 SpComb replicate all the things
20:09 StarBeast Hi. Does anyone know if libgfapi is supported on CentOS 6.5? If yes, where can I find documentation on how to use it?
20:09 Matthaeus joined #gluster
20:14 cjanbanan joined #gluster
20:15 _dist StarBeast: Things might be different now, but 4-5 months ago when I decided to use libgfapi it was very difficult to find debs or rpms where qemu was compiled with libgfapi capability
20:18 daMaestro joined #gluster
20:19 StarBeast _dist: Hi. I am trying to move my stuff to libgfapi. Do you know of some good docs for it? I am also a bit confused about the centos qemu versioning model. qemu-kvm-0.12.1.2-2.415.el6_5.10.x86_64 - not sure which version of qemu I have.
20:23 _dist you can get your qemu version by typing qemu-system-x96_64 --version
20:23 _dist x96/x86*
20:23 Edddgy joined #gluster
20:24 _dist but the version won't really tell you what compile options were used
20:26 coredump joined #gluster
20:29 StarBeast /usr/libexec/qemu-kvm --version
20:29 StarBeast QEMU PC emulator version 0.12.1 (qemu-kvm-0.12.1.2), Copyright (c) 2003-2008 Fabrice Bellard
20:29 StarBeast there is no qemu-system-x86_64 for centos.
20:32 ndk` joined #gluster
20:35 dtrainor joined #gluster
20:36 _dist well, I suspect that while its base version may be 0.12.1, it'll have a lot of backported features. I'm assuming the reason you asked about the version is that you're after libgfapi?
20:40 StarBeast _dist: Yes. I think it has libgfapi support ported, but when I try to create an image using the gluster:// endpoint it says Formatting 'gluster://blabla and hangs :(
20:41 Matthaeus joined #gluster
20:41 skippy left #gluster
20:42 StarBeast _dist: does it require any firewall config apart from 49152 port open?
20:45 semiosis 24007
20:45 semiosis ,,(ports)
20:45 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
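
An iptables sketch of the ports glusterbot lists, for a 3.4+ server (the brick range below assumes only a handful of bricks; widen it to match how many bricks the node actually runs):

    # glusterd management (24008 only needed for rdma)
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    # brick daemons (glusterfsd), 49152 and up on 3.4+
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT
    # gluster NFS, NLM, portmapper
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
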
20:46 _dist StarBeast: so your qemu-img create fails if set to a gluster destination?
20:46 StarBeast _dist: No, it just hangs
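
For context, the sort of command StarBeast appears to be running (volume path and image size are placeholders); with a libgfapi-enabled qemu this talks to glusterd on 24007 and then directly to the brick ports, which is why the firewall question above matters:

    # create a qcow2 image on a gluster volume via libgfapi
    qemu-img create -f qcow2 gluster://server1/myvol/images/vm1.qcow2 20G
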
20:48 Pupeno_ joined #gluster
20:48 _dist StarBeast: I got libgfapi working in CentOS 6.5 & debian, but it wasn't easy to setup or maintain. I was working with semiosis to get some debs together but then I found proxmox and lost interest
20:49 StarBeast semiosis: It looks I have this port open for all my servers
20:50 StarBeast _dist: what was the challenges you had for centos?
20:51 _dist StarBeast: custom compile of qemu was required at the time. Also, there was no good way to manage images stored on glusterfs (there might be now, but I doubt it)
20:51 daMaestro joined #gluster
20:53 StarBeast _dist: doesn't sound so production-ready though. is there any way to keep using fuse for vms?
20:55 StarBeast _dist: I had fuse working for gluster 3.3, and it is working for 3.5 but always healing the files.
20:55 _dist StarBeast: I have the same "always healing files" even through libgfapi on 3.4.2, I'm working with JoeJulian to solve it, hopefully within a couple of weeks
20:56 StarBeast _dist: which makes vm filesystem not so happy under high load.
20:56 JoeJulian _dist: What's the bug id for that again?
20:57 _dist JoeJulian: as crazy as it sounds I don't think I filed a bug, just one about the xml output not working on the report, let me see if I can find one that's already out there
20:57 JoeJulian I could have sworn there was one...
20:57 StarBeast _dist: Did not file the bug yet, as I believe that something wrong with my config though
20:57 JoeJulian StarBeast: What makes you think it's always healing?
20:58 * _dist can't find a bug for this, just mail posts etc
20:59 StarBeast JoeJulian: Every time I run gluster volume heal <volname> info healed, I see new files in the list. It is basically healing every 10 min
20:59 _dist What I find is any VM with significant IO will just be in the list until the IO stops
21:00 _dist (for many machines that's only if they are off)
21:00 JoeJulian Yeah, that's fixed in 3.5.1
21:00 JoeJulian It's not actually healing.
21:01 JoeJulian It's an artifact of the method it uses for keeping track of pending writes.
21:01 cyberbootje joined #gluster
21:02 _dist don't get my hopes up :) how can I verify that's the specific problem I'm seeing?
21:02 JoeJulian _dist: I'm not sure that's your same problem. That's why I was asking for the bug. I wanted to review what your problem was.
21:03 JoeJulian Without making you repeat it again. :D
21:03 _dist my problem is that if a VM is on, it's in the heal list. It never gets "healed" as far as the "info healed" is concerned. The bricks all have the same filesize and date (to the second)
21:03 StarBeast JoeJulian: I updated to 3.5.1 two days ago. It no longer shows a bunch of files when you do 'gluster v heal <volname> info', but it shows a bunch of files in 'gluster v heal <volname> info healed'
21:04 _dist and real heals take an insane amount of time, like a week for 800G of data
21:06 _dist but if I can verify the issue is cosmetic I'd feel comfortable adding my next brick and moving to our DR site. Also, it'd be super nice if there was a way to check a real "unhealthy" list, which I currently can't get
21:06 JoeJulian _dist: I completely agree.
21:07 _dist was this the bug? https://bugzilla.redhat.com/show_bug.cgi?id=1039544 (part of 3.5.1 release notes)
21:07 glusterbot Bug 1039544: medium, medium, ---, pkarampu, CLOSED CURRENTRELEASE, [FEAT] "gluster volume heal info" should list the entries that actually required to be healed.
21:07 stickyboy I stopped using `gluster volume heal <volnam> info` and started using `gluster volume heal <volname> statistics`
21:08 StarBeast JoeJulian: Actually I have a strange situation. I had a 4-brick cluster on 3.5.0 and I decided to add two more bricks; they were added with 3.5.1 as the new version had been released. I realised it after a while. Updated the rest to 3.5.1 and rebooted the servers.
21:08 JoeJulian One thing you can look at. If you get the ,,(extended attributes) on the file in question, it should have the same (generally no greater than 1) value for all trusted.afr entries.
21:08 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
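
A usage sketch of the command glusterbot describes, run against the file's path on each brick (brick path and image name are made up); per JoeJulian, all trusted.afr entries should carry the same value, generally zero or no greater than 1, when no writes are pending between the replicas:

    # on each server, inspect the pending-operation counters for one file
    getfattr -m . -d -e hex /bricks/b1/images/vm1.qcow2
    # trusted.afr.<volume>-client-N=0x000000000000000000000000 on both bricks = clean
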
21:09 _dist looks like 3.4.2 doesn't have "statistics"
21:09 stickyboy I started a full heal a few days ago and now I see it's healed 8 million files.  Yay? :)
21:10 stickyboy _dist: Ah.  Yah, 3.5.0 here.
21:11 _dist JoeJulian: what would be a good way for me to verify (other than the fact that everything is working fine) that my issue is just bad heal reporting?
21:11 _dist also, is anyone else here using libgfapi in prod?
21:12 JoeJulian _dist: One thing you can look at. If you get the ,,(extended attributes) on the file in question, it should have the same (generally no greater than 1) value for all trusted.afr entries.
21:12 glusterbot _dist: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
21:12 diegows joined #gluster
21:13 StarBeast JoeJulian: Thanks I will check it now
21:13 dtrainor joined #gluster
21:15 _dist StarBeast: I'd like to believe you don't have that same issue, since I'm hoping it's fixed in 3.5.1. I'll have to build a test env after all to see if swapping versions makes it go away/come back
21:16 StarBeast _dist: going to have a look at xattrs now, I might break something myself :)
21:19 Pupeno joined #gluster
21:20 JoeJulian lol
21:22 JoeJulian purpleidea: Hehe, you'll find this funny I think. The reason I couldn't do any dns lookups is because I configured my router to block all udp port 53 traffic to anyplace other than my own dns server in the house. I did that a long time ago to help my son avoid those sites that he didn't really want to go to in the first place.
21:23 stickyboy Hah
21:24 semiosis JoeJulian: so what you're saying is... iptables?
21:24 JoeJulian access-list, but yeah
21:24 semiosis hah
21:27 sickness joined #gluster
21:27 sickness hi all
21:34 Pupeno_ joined #gluster
21:36 StarBeast trusted.afr.infdev-vol00-client-4=0x000000000000000000000000
21:36 StarBeast trusted.afr.infdev-vol00-client-5=0x000000000000000000000000
21:36 JoeJulian That's a "clean" state.
21:37 JoeJulian There are no pending writes from that server destined for the other.
21:37 StarBeast JoeJulian: Let me check some other files in this list
21:38 plarsen joined #gluster
21:39 JoeJulian No. I won't let you. I'm revoking your keyboard rights.
21:42 StarBeast JoeJulian: So, they all look the same now. Going to start a perf test
21:43 stickyboy JoeJulian: The compression/decompression docs mention issues with write-behind.  Is that the only option I should be careful of, or other performance options?
21:44 StarBeast JoeJulian: What should I look for? If they are really healing what should be the next debugging step?
21:44 cjanbanan joined #gluster
21:44 Pupeno joined #gluster
21:45 JoeJulian stickyboy: Haven't looked at the code for compression so I have no idea. I would start with what the docs say. If it doesn't work, that's when I usually start to figure out what they didn't think of.
21:45 JoeJulian StarBeast: Going to the pub for a pint in celebration.
21:46 JoeJulian but seriously, make sure all your clients are connected to all the servers for that volume.
21:46 StarBeast JoeJulian: Have a good time :)
21:46 StarBeast JoeJulian: thanks
21:46 JoeJulian If one is not, that could be writing to only one server and it would need to be healed over and over again.
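
One way to verify that (volume name is a placeholder) is the clients listing in volume status, which shows which clients each brick currently has connected:

    # list the clients connected to each brick of the volume
    gluster volume status myvol clients
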
21:47 _dist JoeJulian: I'm heading out, I probably won't have serious time to try the xattr compare, or a 3.5.1 test env, until next week. But who knows, I might squeeze it in somewhere
21:47 JoeJulian ... and I wasn't saying that *I* was going, I was saying that's the next step in debugging a system that's working perfectly.
21:47 stickyboy JoeJulian: Re: compression, cool.  I think I'll stick to the defaults for now until my backend settles down its heal, then try to establish a baseline.
21:48 StarBeast JoeJulian: :)
21:51 gmcwhist_ joined #gluster
22:02 edong23 joined #gluster
22:18 diegows joined #gluster
22:28 mbukatov joined #gluster
22:34 dtrainor joined #gluster
22:44 cjanbanan joined #gluster
22:48 LessSeen_ joined #gluster
22:51 jobewan joined #gluster
22:51 StarBeast trusted.afr.infdev-vol00-client-1=0x000000010000000000000000
22:51 StarBeast trusted.afr.infdev-vol00-client-0=0x000000010000000000000000
22:51 StarBeast sometimes I see these attributes:
22:52 StarBeast but they disappear quite fast
22:53 gildub joined #gluster
22:55 Paul-C joined #gluster
23:13 jobewan joined #gluster
23:28 ira_ joined #gluster
23:32 sputnik1_ joined #gluster
23:35 Matthaeus joined #gluster
23:36 cyberbootje joined #gluster
23:39 doo joined #gluster
23:52 diegows joined #gluster
