
IRC log for #gluster, 2015-02-10


All times shown according to UTC.

Time Nick Message
00:10 jmarley joined #gluster
00:18 MugginsM joined #gluster
00:37 MacWinner joined #gluster
00:38 Gill joined #gluster
00:49 jaank joined #gluster
01:03 nangthang joined #gluster
01:20 T3 joined #gluster
01:28 wkf joined #gluster
01:47 gem joined #gluster
02:15 jmarley joined #gluster
02:15 harish joined #gluster
02:21 harish joined #gluster
02:33 suman_d_ joined #gluster
02:51 jaank joined #gluster
03:27 rjoseph|afk joined #gluster
03:30 overclk joined #gluster
03:37 kshlm joined #gluster
03:41 itisravi joined #gluster
03:43 hagarth joined #gluster
03:59 bharata-rao joined #gluster
04:01 prasanth_ joined #gluster
04:03 rodrigoc joined #gluster
04:08 T3 joined #gluster
04:08 gem joined #gluster
04:13 rjoseph|afk joined #gluster
04:15 meghanam joined #gluster
04:18 overclk joined #gluster
04:20 rodrigoc joined #gluster
04:21 atalur joined #gluster
04:23 T3 joined #gluster
04:24 sdebnath__ joined #gluster
04:24 rcampbel3 joined #gluster
04:25 rodrigoc joined #gluster
04:32 spandit joined #gluster
04:32 rafi joined #gluster
04:33 rodrigoc joined #gluster
04:33 T3 joined #gluster
04:40 rodrigoc joined #gluster
04:45 mikedep333 joined #gluster
04:47 rodrigoc joined #gluster
04:50 sakshi joined #gluster
04:53 suman_d_ joined #gluster
04:54 rodrigoc joined #gluster
04:59 jiffin joined #gluster
04:59 ppai joined #gluster
04:59 soumya joined #gluster
05:00 ndarshan joined #gluster
05:01 anoopcs joined #gluster
05:04 Manikandan joined #gluster
05:04 Manikandan_ joined #gluster
05:05 anrao joined #gluster
05:21 mikedep333 joined #gluster
05:22 schandra joined #gluster
05:27 kdhananjay joined #gluster
05:53 kdhananjay joined #gluster
05:53 ramteid joined #gluster
05:54 shylesh__ joined #gluster
05:56 itpings hi guys
05:56 itpings i need some help
05:56 itpings i cannot automount the shares
05:57 itpings if i manually mount gluster works fine
05:57 itpings i am trying the replication of 2 servers
05:57 JoeJulian selinux?
05:57 itpings disabled
05:57 itpings nfs-server disabled
05:58 itpings firewall disabled
05:58 JoeJulian fpaste.org your automount config
05:58 semiosis itpings: what version of gluster?  what distro/version?
05:58 itpings i have added in /etc/fstab only
05:58 itpings 3.6.2
05:58 JoeJulian Oh, you said automount
05:58 semiosis JoeJulian: o/
05:58 anil joined #gluster
05:58 JoeJulian What're you doin' up at this hour? :D
05:59 schandra joined #gluster
05:59 itpings yeah but it should come up automatically if i add it in /etc/fstab
05:59 itpings i am using centos 7 min install
05:59 JoeJulian Saw that earlier today.
05:59 JoeJulian Do you have _netdev as a mount option?
05:59 * semiosis crashes
05:59 itpings yes
05:59 JoeJulian G'night semiosis
05:59 itpings here is my fstab
05:59 JoeJulian itpings: great...
06:00 itpings 192.168.5.87:/gv0       /mnt/gluster     glusterfs defaults, _netdev    0 0
06:00 JoeJulian I don't have an answer for you tonight. Someone else had the same problem earlier today. I suspect glusterd is just taking too long to listen.
06:01 itpings also gluster start fail when i reboot
06:01 JoeJulian (that space isn't there between the comma and _netdev is it?)
06:01 itpings no space
06:01 itpings ah
06:01 itpings ok removed the space
06:01 itpings and now rebooting
06:02 itpings will update you in a min
06:03 itpings yea haaa
06:03 itpings cool
06:03 itpings its working
06:04 itpings the problem was space
06:04 itpings thanks a lot Joe
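For reference, the working line itpings ended up with: the fstab options field has to be a single comma-separated token, so there is no space between "defaults," and "_netdev". Using the address and mount point from the exchange above:

    192.168.5.87:/gv0  /mnt/gluster  glusterfs  defaults,_netdev  0 0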
06:04 raghu` joined #gluster
06:07 JoeJulian Excellent, glad I could help. :D
06:08 itpings now rebooting both servers and checking
06:08 itpings will update
06:08 itpings in a min
06:09 shubhendu joined #gluster
06:16 itpings no
06:16 itpings not coming up
06:16 itpings i think some problem with config again
06:17 hagarth joined #gluster
06:18 itpings rebooting again
06:18 itpings and testing
06:19 atalur joined #gluster
06:24 nshaikh joined #gluster
06:30 itpings no it doesn't come up
06:37 stickyboy I was replacing a brick on a replica 2 volume yesterday and all FUSE clients which had that volume mounted lost their connections and got "Transport endpoint not connected." Is that normal?
06:38 JoeJulian depends which version you're running. I found that bug a while back in 3.4.2
06:41 ppai joined #gluster
06:42 nshaikh joined #gluster
06:48 atalur joined #gluster
06:49 rwheeler joined #gluster
06:50 nangthang joined #gluster
06:51 bene2 joined #gluster
06:55 stickyboy JoeJulian: Eek. We're on 3.5.3.
06:56 JoeJulian should have been fixed by there.
06:56 JoeJulian At least the bug I found.
06:57 stickyboy JoeJulian: Ok, that's a shame.
06:58 stickyboy I wonder if it's worth filing a bug. I wouldn't know which component to file it on (backend? frontend?).
06:59 JoeJulian Sounds like a fuse client bug. Might even be a crash report in the client log.
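A quick way to check for that (a sketch; the client log name mirrors the mount point with '/' replaced by '-', so the exact path below is an assumption): glusterfs normally prints a backtrace into the client log when it crashes, which grep can pick out.

    # look for a crash report in the FUSE client log of a volume mounted at /mnt/gluster
    grep -B2 -A20 'signal received' /var/log/glusterfs/mnt-gluster.log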
07:03 suman_d joined #gluster
07:06 JoeJulian ls
07:11 itpings hi guys
07:11 itpings same thing
07:11 itpings its not automatically mounting the share
07:11 itpings manual working fine
07:12 itpings if i reboot both machines
07:12 JoeJulian For tonight, don't reboot both machines.
07:13 itpings lol
07:13 JoeJulian I'll try to take a look at it tomorrow.
07:13 itpings ok ty
07:14 kovshenin joined #gluster
07:14 itpings ok here is the funny thing
07:14 itpings if only one machine goes down ...everything comes up fine
07:14 itpings mean share works
07:14 JoeJulian Makes some degree of sense.
07:15 JoeJulian You could look at the client log and see why.
07:15 itpings but if i reboot both i had to manually do the sharing
07:15 JoeJulian s/sharing/mounting/
07:15 LordFolken JoeJulian: hey strange question, I have 3 bricks in a disperse volume, brick one and two both have 875gig of data, but brick 3 only has 780gig
07:16 LordFolken I've done - find /mnt -d -exec getfattr -h -n trusted.ec.heal {} \;
07:16 JoeJulian I haven't played with the disperse translator yet.
07:17 LordFolken I'm running 3.6.2
07:17 itpings Also how can i help gluster community ?
07:17 LordFolken it looks like it has awesome potential, but like, how do you know the volume is consistent
07:18 nshaikh joined #gluster
07:18 JoeJulian One of the most useful things to me is just hanging out here and helping figure out troubles. Not only does that help the community, but it gives you a much deeper understanding of the subject and increases your value throughout the industry.
07:18 nangthang joined #gluster
07:19 itpings sure will do that
07:19 itpings also i will make some videos on howto ..but first need to fix my own lol
07:19 stickyboy itpings: Write blog posts!
07:19 stickyboy itpings: File bugs!
07:19 JoeJulian That would be great. Apparently a lot of people also appreciate blogging. I've had a lot of positive feedback from mine.
07:19 itpings yea i do tht stickyboy
07:20 stickyboy itpings: Read / respond on the mailing list. :)
07:20 JoeJulian Meh, screw the mailing list. ;)
07:20 itpings i have itpingsdotcom
07:20 itpings i do blogging there
07:20 stickyboy JoeJulian: True. :)
07:20 RameshN joined #gluster
07:21 JoeJulian Mine, in case you haven't found it by some weird inability to google, is http://joejulian.name
07:21 itpings and now a days running lzh project (linux zero to hero)
07:21 itpings nice to meet you guys
07:21 stickyboy Mine is: https://mjanja.ch/ :D
07:21 itpings ok i will bookmark them
07:23 itpings joe your site not working
07:23 itpings mjanja.ch working fine
07:23 jtux joined #gluster
07:23 anil joined #gluster
07:23 JoeJulian Looks ok from here.
07:24 itpings opening and closing
07:24 itpings i think i have some dns thingy
07:24 JoeJulian Aack! How long has ipv4 been broken?!?!
07:24 JoeJulian Oh, nevermind.
07:25 JoeJulian duh
07:25 JoeJulian I can't go to the ip address, I have to go to the hostname.
07:25 stickyboy JoeJulian: Crap. :D
07:25 JoeJulian So, anyway, nope. It's working fine via ipv4 or 6.
07:26 JoeJulian You'll wanna get that fixed, itpings, at least 50% of anything gluster related that you search will point to my blog.
07:26 itpings i think dotname is the issue
07:26 itpings ah lol
07:26 JoeJulian It's a valid TLD.
07:27 itpings yeah i know that
07:27 JoeJulian Maybe your root hints are 10 years old. ;D
07:27 itpings yeah thats true
07:27 itpings anyway will get it fixed
07:27 itpings i wonder its me or linux
07:27 itpings lol
07:28 JoeJulian hehe
07:28 JoeJulian ... I need to do a bunch of updates to my blog...
07:29 suman_d_ joined #gluster
07:29 itpings any other domain o
07:29 itpings i mean any other domain pointing to http://joejulian.name/ ?
07:30 JoeJulian Nope
07:30 JoeJulian That's my "brand".
07:30 itpings joejulia.com would be great
07:30 itpings lol
07:30 itpings anyway brb guys
07:31 stickyboy JoeJulian: brand. :P
07:31 JoeJulian hehe
07:31 JoeJulian I am what I am.
07:31 stickyboy JoeJulian: Mine's mjanja. Means "hustler" in Swahili. :)
07:31 JoeJulian Unfortunately, Popeye already had that byline.
07:34 bene_in_BLR joined #gluster
07:41 LordFolken just FYI - find $1 -type f -print0 | xargs -0 stat --format '%Y :%y %n'
07:41 tanuck joined #gluster
07:41 LordFolken appears to have fixed my brick consistency issue
07:41 JoeJulian That's old-school.
07:42 LordFolken my gigabit network is maxed out but it's rebuilding the missing items from the brick
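As a sketch of what that one-liner does (the mount point is whatever $1 expands to; /mnt/gluster is only an example): stat-ing every file through the FUSE mount forces a lookup on each one, and those lookups are what trigger self-heal of anything a brick is missing.

    #!/bin/sh
    # Walk a glusterfs FUSE mount and stat every file to force lookups,
    # which makes the client repair missing or stale copies on the bricks.
    # $1 is the mount point (e.g. /mnt/gluster), not a brick path.
    find "$1" -type f -print0 | xargs -0 stat --format '%Y :%y %n' > /dev/null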
07:52 sdebnath__ joined #gluster
07:55 lalatenduM joined #gluster
07:57 Manikandan joined #gluster
07:59 ppai joined #gluster
07:59 shubhendu joined #gluster
08:02 bala joined #gluster
08:06 mbukatov joined #gluster
08:10 kovshenin joined #gluster
08:10 eychenz joined #gluster
08:11 itpings back
08:11 itpings you guys still there ?
08:11 glusterbot News from newglusterbugs: [Bug 1188184] Tracker bug :  NFS-Ganesha new features support for  3.7. <https://bugzilla.redhat.com/show_bug.cgi?id=1188184>
08:11 prasanth_ joined #gluster
08:11 [Enrico] joined #gluster
08:23 eychenz joined #gluster
08:29 shubhendu joined #gluster
08:31 DV joined #gluster
08:35 LebedevRI joined #gluster
08:36 soumya_ joined #gluster
08:37 fsimonce joined #gluster
08:40 rjoseph|afk joined #gluster
08:45 dusmant joined #gluster
08:53 rjoseph|afk joined #gluster
08:55 meghanam joined #gluster
09:02 tanuck joined #gluster
09:02 soumya_ joined #gluster
09:08 nangthang joined #gluster
09:10 IvanRossi joined #gluster
09:11 ppai joined #gluster
09:11 glusterbot News from newglusterbugs: [Bug 1191006] Building argp-standalone breaks nightly builds on Fedora Rawhide <https://bugzilla.redhat.com/show_bug.cgi?id=1191006>
09:12 suman_d_ joined #gluster
09:12 TvL2386 joined #gluster
09:13 IvanRossi left #gluster
09:15 rouge2507 joined #gluster
09:17 harish joined #gluster
09:17 atalur joined #gluster
09:19 overclk joined #gluster
09:20 schandra joined #gluster
09:29 rjoseph|afk joined #gluster
09:32 al joined #gluster
09:42 nangthang joined #gluster
09:47 gildub joined #gluster
09:52 sdebnath__ joined #gluster
09:55 jbrooks joined #gluster
09:55 ralala joined #gluster
10:02 ricky-ticky joined #gluster
10:03 Manikandan joined #gluster
10:04 T0aD joined #gluster
10:06 mbukatov joined #gluster
10:11 jmarley joined #gluster
10:21 bjornar joined #gluster
10:22 itpings just completed my howto
10:22 itpings with gluster and urbackup
10:26 anrao joined #gluster
10:30 xavih LordFolken: which version of gluster are you using ? the first command you tried should have worked
10:31 xavih LordFolken: is there anything in the logs  when you executed the first command ?
10:32 hagarth joined #gluster
10:37 suman_d_ joined #gluster
10:43 nangthang joined #gluster
10:49 shubhendu joined #gluster
11:00 mbukatov joined #gluster
11:11 T3 joined #gluster
11:14 ira joined #gluster
11:15 Slashman joined #gluster
11:15 anrao joined #gluster
11:17 kkeithley1 joined #gluster
11:18 ppai joined #gluster
11:20 swebb joined #gluster
11:26 diegows joined #gluster
11:28 mbukatov joined #gluster
11:32 mbukatov joined #gluster
11:51 ndevos REMINDER: Gluster Bug Triage meeting starting in 10 minutes in #gluster-meeting
12:01 ndevos REMINDER: Gluster Bug Triage meeting starting *now* in #gluster-meeting
12:02 mbukatov joined #gluster
12:04 itisravi_ joined #gluster
12:08 nbalacha joined #gluster
12:12 anrao joined #gluster
12:18 shubhendu joined #gluster
12:20 rjoseph|afk joined #gluster
12:22 sdebnath__ joined #gluster
12:30 rafi joined #gluster
12:32 hagarth joined #gluster
12:33 ricky-ticky1 joined #gluster
12:37 bjornar joined #gluster
12:40 suman_d_ joined #gluster
12:42 glusterbot News from newglusterbugs: [Bug 1138229] Disconnections from glusterfs through libgfapi <https://bugzilla.redhat.com/show_bug.cgi?id=1138229>
12:42 glusterbot News from newglusterbugs: [Bug 1187347] RPC ping does not retransmit <https://bugzilla.redhat.com/show_bug.cgi?id=1187347>
12:42 glusterbot News from newglusterbugs: [Bug 1188886] volume extended attributes remain on failed volume creation <https://bugzilla.redhat.com/show_bug.cgi?id=1188886>
12:42 glusterbot News from newglusterbugs: [Bug 1187372] Samba "use sendfile" is incompatible with GlusterFS libgfapi vfs_glusterfs. <https://bugzilla.redhat.com/show_bug.cgi?id=1187372>
12:42 glusterbot News from newglusterbugs: [Bug 1187456] Performance enhancement for RDMA <https://bugzilla.redhat.com/show_bug.cgi?id=1187456>
12:42 glusterbot News from newglusterbugs: [Bug 1187296] No way to gracefully rotate the libgfapi Samba vfs_glusterfs logfile. <https://bugzilla.redhat.com/show_bug.cgi?id=1187296>
12:42 glusterbot News from newglusterbugs: [Bug 1188066] logging improvements in marker translator <https://bugzilla.redhat.com/show_bug.cgi?id=1188066>
12:42 glusterbot News from newglusterbugs: [Bug 1191072] ipv6 enabled on the peer, but dns resolution fails with ipv6 and gluster does not fall back to ipv4 <https://bugzilla.redhat.com/show_bug.cgi?id=1191072>
12:53 awerner joined #gluster
12:57 sdebnath__ joined #gluster
12:58 anoopcs joined #gluster
13:01 deniszh joined #gluster
13:01 ricky-ticky joined #gluster
13:04 mbukatov joined #gluster
13:04 ricky-ticky3 joined #gluster
13:07 anoopcs_ joined #gluster
13:12 glusterbot News from newglusterbugs: [Bug 1146413] Symlink mtime changes when rebalancing <https://bugzilla.redhat.com/show_bug.cgi?id=1146413>
13:12 glusterbot News from newglusterbugs: [Bug 1158067] Gluster volume monitor hangs glusterfsd process <https://bugzilla.redhat.com/show_bug.cgi?id=1158067>
13:12 glusterbot News from newglusterbugs: [Bug 1158120] Data corruption due to lack of cache revalidation on open <https://bugzilla.redhat.com/show_bug.cgi?id=1158120>
13:12 glusterbot News from newglusterbugs: [Bug 1191100] Data corruption due to lack of cache revalidation on open <https://bugzilla.redhat.com/show_bug.cgi?id=1191100>
13:12 glusterbot News from resolvedglusterbugs: [Bug 1147236] gluster 3.6 compatibility issue with gluster 3.3 <https://bugzilla.redhat.com/show_bug.cgi?id=1147236>
13:12 glusterbot News from resolvedglusterbugs: [Bug 1170575] Cannot mount gluster share with mount.glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1170575>
13:12 glusterbot News from resolvedglusterbugs: [Bug 1117822] Tracker bug for GlusterFS 3.6.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1117822>
13:19 _Bryan_ joined #gluster
13:23 suman_d_ joined #gluster
13:34 ricky-ticky joined #gluster
13:39 wkf joined #gluster
13:49 Gill joined #gluster
13:55 atalur joined #gluster
13:59 rainlike joined #gluster
14:02 sdebnath__ joined #gluster
14:06 virusuy joined #gluster
14:10 dusmant joined #gluster
14:14 bennyturns joined #gluster
14:17 prasanth_ joined #gluster
14:18 calisto joined #gluster
14:26 meghanam joined #gluster
14:44 lmickh joined #gluster
14:44 elico joined #gluster
14:46 ildefonso joined #gluster
14:49 rainlike left #gluster
14:50 dgandhi joined #gluster
15:01 B21956 joined #gluster
15:05 anrao joined #gluster
15:11 georgeh-LT2 joined #gluster
15:12 glusterbot News from newglusterbugs: [Bug 1191163] /usr/lib/python2.7/site-packages/gluster/ not owned by package <https://bugzilla.redhat.com/show_bug.cgi?id=1191163>
15:15 wushudoin joined #gluster
15:19 jmarley joined #gluster
15:20 gildub joined #gluster
15:21 soumya_ joined #gluster
15:25 n-st joined #gluster
15:29 mbukatov joined #gluster
15:32 lalatenduM joined #gluster
15:39 suman_d_ joined #gluster
16:13 glusterbot News from newglusterbugs: [Bug 1191176] Since 3.6.2: failed to get the 'volume file' from server <https://bugzilla.redhat.com/show_bug.cgi?id=1191176>
16:21 ninkotech joined #gluster
16:21 ninkotech_ joined #gluster
16:21 jeffrin joined #gluster
16:27 MacWinner joined #gluster
16:28 jeffrin left #gluster
16:33 bennyturns joined #gluster
16:33 side_con1rol joined #gluster
16:39 side_control joined #gluster
16:40 mbukatov joined #gluster
16:43 eightyeight hmm. thought i had glusterbot filtered
16:43 * eightyeight doublechecks
16:45 rouge2507 left #gluster
16:45 CyrilPeponnet plop guys
16:45 CyrilPeponnet can someone explain what is op-version and how to make it consistent between nodes ?
16:47 ndevos CyrilPeponnet: normally the op-version should get updated when you update the version of glusterfs on the system
16:47 ndevos http://www.gluster.org/community/documentation/index.php/OperatingVersions contains a list of versions
16:47 CyrilPeponnet Yeah normaly
16:48 ndevos http://www.gluster.org/community/documentation/index.php/Features/Opversion contains more details about the feature
16:48 CyrilPeponnet But on my own it's inconsistent since aI try to add a 3.6 node on my 3.5 setup
16:48 ricky-ticky1 joined #gluster
16:48 ndevos right, then the op-version of the storage servers should be the one for the 3.5 version
16:48 tanuck joined #gluster
16:49 ndevos otherwise it'll try to enable 3.6 features on the 3.5 servers, which won't work
16:49 CyrilPeponnet sure but I have a mix between 3 and 2
16:49 CyrilPeponnet how to make them 2 :)
16:49 CyrilPeponnet or 3
16:49 CyrilPeponnet speaking of this
16:50 ndevos I think you can stop the glusterd process, and edit the glusterd.info file under /var/lib/glusterd
16:50 CyrilPeponnet I run 3.5.2.1 el7 an my op-version is 2
16:50 ndevos when you finished the editing, you should be able to start glusterd again
16:50 CyrilPeponnet it should be 30501 according to the guide
16:51 CyrilPeponnet I already make this uniform but some volumes still refers on another op-version
16:51 * ndevos doesnt know how the op-version should get updated...
16:52 ndevos maybe you could send an email to gluster-users@gluster.org and ask about it? the glusterd developers should know the details
16:52 CyrilPeponnet I will try that
16:52 calisto joined #gluster
16:53 CyrilPeponnet thx anyway
16:53 ndevos actually, I think there even is a 'gluster op-version' command, or something like that
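For the record, both approaches look roughly like this (a sketch: the 30501 value is the one mentioned above for 3.5.x, and whether the cluster.op-version volume option exists in the running 3.5 release is an assumption to verify; the offline edit has to be repeated on every server and all servers must agree):

    # 1) offline, per server: stop glusterd, edit the stored op-version, restart
    systemctl stop glusterd
    sed -i 's/^operating-version=.*/operating-version=30501/' /var/lib/glusterd/glusterd.info
    systemctl start glusterd

    # 2) online, if the option is available in the installed release
    gluster volume set all cluster.op-version 30501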
16:56 kshlm joined #gluster
16:57 jmarley joined #gluster
17:03 harish joined #gluster
17:08 tdasilva joined #gluster
17:09 mattrixha joined #gluster
17:10 cvd-tyoung joined #gluster
17:14 _Bryan_ joined #gluster
17:15 T3 joined #gluster
17:16 syntaxerrors left #gluster
17:20 jdossey joined #gluster
17:20 XpineX joined #gluster
17:21 sdebnath__ joined #gluster
17:31 Sal_ joined #gluster
17:32 Sal_ hey guys
17:33 rcampbel3 joined #gluster
17:33 coredump joined #gluster
17:40 jmarley joined #gluster
17:45 plarsen joined #gluster
17:45 JoeJulian It's *supposed* to auto-adjust the op-version based on the feature set used. I'm not sure that's working.
17:55 dfrobins joined #gluster
17:55 dfrobins left #gluster
17:57 dfrobins joined #gluster
17:57 dfrobins left #gluster
17:58 dfrobins joined #gluster
17:58 dfrobins left #gluster
18:00 suman_d_ joined #gluster
18:08 harmw joined #gluster
18:10 harmw guys, I'm wondering where glusterfs realy shines when compared to ceph considering an openstack deployment
18:12 Rapture joined #gluster
18:14 JoeJulian GlusterFS is faster and easier to set up (by hours or days), but it's rigid. Replicas are defined at volume creation. Rebalance tools are useless so growing your cluster is cumbersome. Ceph handles losing a server better. The stuff that was on that server is automatically re-replicated somewhere else.
18:15 JoeJulian Gluster's management is shared among all the servers and the clients connect directly to every storage brick eliminating (potentially) single points of failure.
18:15 squizzi joined #gluster
18:15 harmw ok, but isn't that re-replicating something you'd always want to have?
18:15 harmw in terms of availability?
18:16 JoeJulian Ceph uses metadata servers, but you can (and should) have multiples of them to provide redundancy but they are a potential choke point.
18:16 harmw yup
18:16 JoeJulian In gluster, if a failed bit of storage is replaced, that's when the replica is repaired.
18:16 diegows joined #gluster
18:17 harmw are there any specific resource requirements with gluster? since ceph is taking all too much ram
18:17 JoeJulian The resource utilization can be tuned.
18:18 harmw ok, but what if I'd install it on some ARM platform... :)
18:18 harmw would that perform?
18:18 JoeJulian I've heard of people installing it on rpi
18:19 harmw nah, I'm aiming for something with atleast gbit and sata
18:19 JoeJulian self-heal recovery performance would suck, but...
18:20 harmw self-heal? as in gluster detects a failing disk drive (a brick?) and starts replicating the affected bits?
18:22 JoeJulian self-heal is what takes place if a brick was unavailable for update. Be it failed or just having a server offline.
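A hedged example of watching that repair happen (gv0 is just an illustrative volume name): the heal commands list what each brick still needs to sync and can kick off the repair without waiting for the timer.

    # list entries still pending heal on each brick
    gluster volume heal gv0 info
    # trigger a heal of those entries now
    gluster volume heal gv0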
18:22 JoeJulian Are you looking at using one of these? https://plus.google.com/+gregkroahhartman/posts/f8ubenKuSTZ
18:23 JoeJulian 'cause they're awesome. :D
18:23 harmw not specifically, but it looks nice
18:23 harmw I already have a cubietruck3
18:23 JoeJulian I've got to make friends with Greg. He's got cool toys.
18:24 harmw does it come with gbit ?
18:25 harmw hm no ethernet whatsoever... :)
18:25 harmw I'd rather not do gluster over wifi.. :)
18:28 JoeJulian Oh, man. I missed that. :(
18:29 JoeJulian But an 8 gig arm64 for $130... not too shabby.
18:29 PeterA joined #gluster
18:30 JoeJulian So your board has really slow memory. That's going to potentially present a bottleneck due to context switching.
18:30 calisto joined #gluster
18:36 harmw yea
18:39 Pupeno joined #gluster
18:50 gkleiman joined #gluster
18:57 Ramereth joined #gluster
19:12 Philambdo joined #gluster
19:15 PeterA joined #gluster
19:24 tanuck joined #gluster
19:29 PeterA1 joined #gluster
19:34 harmw JoeJulian: you're familiar with cloud stuff like openstack?
19:35 syntaxerrors joined #gluster
19:43 JoeJulian harmw: yeah, you could say that.
19:44 harmw :)
19:44 harmw any pointers on how to add gluster to cinder and glance? specifically, some clever way to mimic Ceph's CoW
19:44 harmw if even possible
19:46 JoeJulian I wish, no. At least, not yet.
19:48 syntaxerrors Quick question that I may have missed in documentation. Does gluster have any sense of balancing % usage on each EBS volume? I know that a rebalance will take care of this but wanted to know if gluster would allow a volume to reach 100% if other less used volumes exist.
19:50 syntaxerrors or if it's simply round-robin you're out of luck ;)
19:50 JoeJulian dht works by hashing the filename and creating the file on the brick with the range assigned to accept that hash, so no. There's no logic to actively balance usage. It's more about balancing the likelihood of a file to exist on a given distribute subvolume. If one of those files exceeds the size of the brick, it will fill it up.
19:51 JoeJulian If a brick does git full, however, new files will be created on other bricks and a pointer will be created to show where the file actually is.
19:51 JoeJulian "git"... can you tell what I've been doing all day?
19:52 syntaxerrors JoeJulian: not at all ;)
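Those pointers are visible on the bricks themselves (the paths below are illustrative, not from this conversation): a DHT link file is a zero-byte entry with only the sticky bit set, carrying a trusted.glusterfs.dht.linkto xattr that names the subvolume holding the real data.

    # run against a brick directory, not through the FUSE mount
    ls -l /data/brick1/gv0/somefile
    # ---------T 1 root root 0 ...   (zero bytes, sticky bit only = link file)
    getfattr -n trusted.glusterfs.dht.linkto -e text /data/brick1/gv0/somefile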
19:59 PeterA joined #gluster
20:08 Pupeno joined #gluster
20:08 Pupeno joined #gluster
20:12 gildub joined #gluster
20:21 cvd-tyoung left #gluster
20:58 Pupeno joined #gluster
20:58 Pupeno joined #gluster
21:02 wkf_ joined #gluster
21:02 badone_ joined #gluster
21:04 tanuck joined #gluster
21:35 Philambdo joined #gluster
21:56 dgandhi joined #gluster
22:01 PeterA joined #gluster
22:01 redbeard joined #gluster
22:32 telmich joined #gluster
22:32 telmich good evening
22:33 telmich I try to mount a gluster volume that is running on two boxes with two nics from the public interfaces; the bricks are configured on the internal address; so far I get the error
22:33 telmich "Mount failed. Please check the log file for more details." without any log entry
22:37 JoeJulian Dumb question, but where are you looking for a log entry?
22:38 al joined #gluster
22:40 telmich JoeJulian: syslog
22:40 telmich + /var/log/gluster/*.log
22:41 JoeJulian /var/log/glusterfs/$(echo $mountpoint | tr '/' '-').log is where it should be.
22:41 telmich I *assume* that this fails because the bricks are on 192.168.0.1 and 192.168.0.2, which the other node cannot reach
22:42 al joined #gluster
22:42 telmich ups, indeed -  0-home-xfs-plain-client-0: connection to 192.168.0.1:24007 failed (No route to host)
22:42 telmich can I tell gluster somehow to announce different IPs for a brick?
22:43 diegows joined #gluster
22:43 al joined #gluster
22:44 JoeJulian @hostnames
22:44 glusterbot JoeJulian: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
22:44 JoeJulian Of course you'll need to recreate your volume using hostnames.
22:45 JoeJulian Then you just have to have the hostname resolve to where you want it to resolve.
22:46 telmich JoeJulian: I understand. is there any other way I could expose the glusterfs to an "external" host?
22:46 telmich (btw, did I explain well enough what kind of setup I have?)
22:46 JoeJulian VPN tunnel?
22:47 telmich ah, ok, in that case not from gluster directly
22:47 JoeJulian No, the bricks listen on ::0 and the way they're identified depends on how you create your volume.
22:47 JoeJulian If you create it with IPs, those are the only IPs it will announce.
22:48 JoeJulian Use hostnames and /etc/hosts or split-horizon dns to have different machines find those hostnames differently.
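A minimal sketch of that split-horizon setup (the hostnames, the 203.0.113.x front-end addresses, and the volume/brick names are assumptions, not from this log): every machine resolves the same brick hostnames, but to whichever network it can actually reach, and the volume is created with those hostnames so the announced bricks work for both sides.

    # /etc/hosts on the storage servers (back-end network)
    192.168.0.1  gluster1
    192.168.0.2  gluster2

    # /etc/hosts on the external client (reachable front-end addresses)
    203.0.113.1  gluster1
    203.0.113.2  gluster2

    # volume recreated with hostnames instead of IPs
    gluster volume create gv0 replica 2 gluster1:/bricks/gv0 gluster2:/bricks/gv0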
22:51 telmich ok
22:51 telmich I will give that a try
23:05 badone_ joined #gluster
23:06 siel joined #gluster
23:13 T3 joined #gluster
23:16 jaank joined #gluster
23:34 plarsen joined #gluster
23:36 B21956 left #gluster
23:37 T3 JoeJulian: just in case you miss the message on ml, that network issue I was expecting has gone
23:37 T3 thanks for pointing out the precious tip
23:37 T3 "blame network guys"
23:41 JoeJulian Wow! Glad it worked. :D
23:46 Gill joined #gluster
23:59 _Bryan_ joined #gluster
