
IRC log for #gluster, 2015-01-14


All times shown according to UTC.

Time Nick Message
00:04 morse_ joined #gluster
00:06 cfeller_ joined #gluster
00:09 necrogami_ joined #gluster
00:09 afics joined #gluster
00:10 plarsen joined #gluster
00:10 tryggvil joined #gluster
00:11 lkoranda joined #gluster
00:11 Gugge joined #gluster
00:12 polychrise joined #gluster
00:20 n-st joined #gluster
00:25 daddmac left #gluster
00:32 MugginsM joined #gluster
00:38 tryggvil joined #gluster
00:40 jaank joined #gluster
00:45 Bosse joined #gluster
00:45 fandi joined #gluster
00:56 glusterbot News from newglusterbugs: [Bug 1181870] Geo-replication fails with OSError: [Errno 16] Device or resource busy <https://bugzilla.redhat.com/show_bug.cgi?id=1181870>
00:57 tryggvil joined #gluster
01:01 DarkBidou joined #gluster
01:01 DarkBidou hi
01:01 glusterbot DarkBidou: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
01:02 DarkBidou i totally removed glusterfs and the page loads in 300ms now, instead of 2800ms
01:02 DarkBidou do you recommend using glusterfs / replication for a website ?
01:07 DV joined #gluster
01:10 plarsen joined #gluster
01:18 B21956 joined #gluster
01:19 B21956 left #gluster
01:20 sadbox joined #gluster
01:32 calisto joined #gluster
01:39 lyang0 joined #gluster
01:41 johnnytran joined #gluster
01:47 Durzo DarkBidou, i wouldnt serve up websites from gluster
01:47 Durzo DarkBidou, website code is static and does not change.. you can have a local copy on all your webservers and use gluster for a data drive that the webservers use to store dynamic data
01:48 Durzo like session cache
01:48 Durzo if you cant use a DB for that
01:50 a2 joined #gluster
01:52 nangthang joined #gluster
01:52 DarkBidou i mean PHP website
01:52 a2 joined #gluster
01:52 DarkBidou well, you werent there yesterday
01:52 DarkBidou but my setup is like this, i have 1 varnish in front of 2 webservers with a mariadb galera cluster
01:53 DarkBidou and the famous gluster fs
01:53 strata DarkBidou: lol i have almost the exact same setup here
01:53 DarkBidou to have it fully redundant
01:53 strata mine is 1 haproxy, 2 varnish, 2 webserver, mariadb cluster, glusterfs
01:53 DarkBidou however, web pages perform very slowly
01:53 Durzo i have the same setup too
01:53 Durzo and you do not store PHP files on gluster
01:53 DarkBidou i did and perf was horrible
01:54 DarkBidou the complete website (its drupal site)
01:54 Durzo our webserver frontends store a copy of the php code locally, with sessions going to db and theme cache going to gluster
01:54 Durzo we have drupal, moodle, wordpress etc
01:54 DarkBidou removing the site from the replica made page load go from 2800ms to 300ms
01:54 Durzo drupal files dir goes to gluster
01:54 Durzo like i said, you dont want to be serving php from gluster
01:54 DarkBidou yes this is what i finally did
01:54 Durzo however
01:54 Durzo gluster shouldnt be giving you that much load
01:55 Durzo are your gluster servers in the same local network?
01:55 DarkBidou i was disappointed not to have the whole site on gluster
01:55 bala joined #gluster
01:55 DarkBidou yes with gigabit network
01:55 MugginsM we serve a lot of stuff from gluster and it's quite fast
01:55 Durzo ditto
01:55 Durzo but our setup is in AWS
01:56 DarkBidou i had 100mb/s writing, 160mbit/s reading
01:56 Durzo i would still never dream of serving php from it though
01:56 MugginsM only gets slow with a lot of small files
01:56 DarkBidou AWS. can you host a drupal site with that?
01:56 Durzo with small files its best to use gluster in NFS mode
01:56 Durzo but then you lose the failover ability
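
(A quick sketch of the two mount styles being contrasted here, with hypothetical server and volume names. The native FUSE client is replication-aware and fails over on its own; gluster's built-in NFS server needs NFSv3 and ties the client to whichever server it mounted:)

    # native FUSE client: talks to all replicas, fails over automatically
    mount -t glusterfs server1:/myvol /mnt/myvol

    # built-in NFS server: often faster for many small files, but no
    # client-side failover unless a VIP floats in front of the servers
    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/myvol-nfs
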
01:56 Durzo DarkBidou, AWS EC2 is a regular virtual server, you can do whatever you want in it
01:57 MugginsM our gluster is on AWS, is fine
01:57 MugginsM there is a slight hit from the number of bricks we have, but not too bad
01:57 DarkBidou do you serve php sites ?
01:57 Durzo MugginsM, do you use geo-repl by chance?
01:58 MugginsM no, just servers replicated between availability zones, on the same network
01:58 DarkBidou i try to use apc
01:58 DarkBidou to reduce impact
01:59 DarkBidou with the apc.stat = 0
01:59 MugginsM we don't use PHP, but it should be fine, 2800ms is a lot
01:59 Durzo MugginsM, i've got 2 bricks in replica mode, one in each AZ, with a geo-repl server in a 3rd AZ and geo-repl is constantly failing.. its unusable
01:59 DarkBidou its clearly too much for a web page
01:59 DarkBidou 300ms for complex page
01:59 DarkBidou otherwise varnish does it in 10ms
01:59 MugginsM yeah, haven't used geo-repl
02:00 DarkBidou how about SSD ?
02:00 Durzo MugginsM, how do you do backup?
02:00 DarkBidou replica with SSD
02:00 MugginsM we snapshot backup
02:00 Durzo DarkBidou, my replicas are on 2x 250GB raid0 array
02:00 MugginsM and move the snapshots off to glacier
02:00 DarkBidou i mean, do you know what really is slow ?
02:00 Durzo MugginsM, LVM snapshot? or brick snapshots?
02:00 Durzo (ebs)
02:00 DarkBidou when having 2 bricks in replica mode
02:00 DarkBidou TCP
02:01 MugginsM EBS drive snapshots, effectively bricks
02:01 MugginsM 12x2TB :)
02:01 Durzo MugginsM, what about gluster metadata?
02:01 MugginsM the root volumes are snapshotted at the same time
02:01 Durzo (/var/lib/glusterfs)
02:01 Durzo ok
02:02 Durzo we found our volumes changed too frequently to get a good ebs snapshot
02:02 MugginsM we've done a couple of recovery drills and it was fine. very slow, but it all came back
02:02 harish joined #gluster
02:02 MugginsM yeah, potentially a little bit of FS corruption, but XFS has been pretty tough
02:03 MugginsM and we tend not to alter files in place, it's always create new or append
02:03 Gill joined #gluster
02:03 DarkBidou about the small php files and the load generated by gluster, is there something we can do ?
02:03 DarkBidou like a "read with no lock"
02:04 DarkBidou or "optimistic"
02:04 Durzo DarkBidou, cache
02:04 Durzo DarkBidou, varnish in front of apache, using APC or xcache
02:04 MugginsM yeah, caching gets big wins
02:04 Durzo if its drupal, use the internal drupal cache too
02:04 DarkBidou ive done caching but i dont get why the cpu load is this high
02:04 DarkBidou http://pastebin.com/xUqmEr4E
02:04 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
02:04 Durzo DarkBidou, hmm
02:04 DarkBidou this is my setup
02:05 Durzo DarkBidou, heres something
02:05 Durzo we had apache pointing at gluster once before and noticed high cpu
02:05 Durzo i found that every page request, apache looks for a .htaccess file
02:05 Durzo you can disable this in the vhost config
02:05 MugginsM yeah, that .htaccess thing can be a killer
02:05 DarkBidou humm
02:06 DarkBidou very intresting
02:06 Durzo AllowOverride: When this directive is set to None, then .htaccess files are completely ignored. In this case, the server will not even attempt to read .htaccess files in the filesystem.
02:06 DarkBidou im old i guess, i use apache instead of nginx
02:07 Durzo http://httpd.apache.org/docs/2.4/mod/core.html#allowoverride
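
(Durzo's advice in config form: a hedged sketch for Apache 2.4, assuming a gluster-backed docroot at /mnt/gluster/www; the paths and file name are hypothetical. The former .htaccess rules move into the <Directory> block, and AllowOverride None stops the per-request .htaccess lookups on the gluster mount:)

    cat > /etc/httpd/conf.d/mysite.conf <<'EOF'
    <VirtualHost *:80>
        DocumentRoot /mnt/gluster/www
        <Directory /mnt/gluster/www>
            AllowOverride None
            Require all granted
            # ...rules formerly kept in .htaccess go here...
        </Directory>
    </VirtualHost>
    EOF
    apachectl graceful
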
02:08 haomaiwa_ joined #gluster
02:09 DarkBidou how about apache mod_file_cache
02:10 DarkBidou instead of disabling ?
02:10 Durzo file cache would defeat the purpose of gluster
02:10 Durzo you may as well store the files locally
02:14 johnnytran joined #gluster
02:20 DarkBidou Durzo: so true... :S
02:26 DarkBidou http://colin.mollenhour.com/2010/06/30/the-right-way-to-optimize-apaches-htaccess-files/
02:27 DarkBidou this article is good
02:27 DarkBidou http://www.sirgroane.net/2010/03/tuning-glusterfs-for-apache-on-ec2/ <-- this one confirms the .htaccess problem
02:27 bharata-rao joined #gluster
02:27 glusterbot DarkBidou: <'s karma is now -12
02:27 Durzo stating the obvious.. yeah you gotta take the contents of your .htaccess and move it into the apache vhost, then disable it via AllowOverride - thats it
02:35 harish joined #gluster
02:38 bala joined #gluster
02:40 hagarth joined #gluster
02:45 nrcpts joined #gluster
02:45 schrodinger_ joined #gluster
02:48 owlbot joined #gluster
02:48 Staples84 joined #gluster
02:51 DarkBidou i've done it, but there is no real gain
02:57 DarkBidou how do you know which files/processes are accessed during a web hit ?
02:57 Durzo sysdig
03:00 strata joined #gluster
03:00 DarkBidou thanks :)
03:02 JordanHackworth joined #gluster
03:03 nangthang joined #gluster
03:04 DarkBidou sysdig evt.type=open and fd.name contains /my_gluster_path ?
03:04 DarkBidou something like this ?
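
(DarkBidou's guess is essentially valid sysdig filter syntax; a fuller sketch with a hypothetical mount path, printing which process opens which file:)

    sysdig -p '%proc.name %fd.name' \
        "evt.type=open and fd.name contains /my_gluster_path"
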
03:06 masterzen joined #gluster
03:08 DarkBidou maybe im not in the right channel, but php-fpm is reading php files while APC is enabled
03:11 Durzo what kind of latency do you get when you ping the gluster servers from the webserver?
03:12 DarkBidou the webserver is on the gluster server
03:12 siel joined #gluster
03:14 DarkBidou apc.stat = 0
03:14 DarkBidou in case you ask :)
03:17 johnnytran joined #gluster
03:18 Durzo wait what
03:18 Durzo your webserver is also your gluster server?
03:18 Durzo so you're mounting the gluster volume via localhost or something?
03:18 MugginsM that's really slow for local
03:18 MugginsM like broken slow
03:18 DarkBidou via localhost yes
03:18 Durzo yeah
03:18 Durzo but also
03:19 Durzo if you're doing it that way, why the need for gluster at all?
03:19 DarkBidou redundancy
03:19 Durzo but PHP code doesnt change
03:19 DarkBidou i have other webservers linked with that one
03:19 Durzo keep it in a git repo or something
03:20 DarkBidou yes removing PHP from the gluster directory produced a huge boost
03:20 DarkBidou however, i wonder why all these IO
03:20 DarkBidou while i have APC enabled
03:20 DarkBidou with apc.stat=0
03:24 JordanHackworth joined #gluster
03:29 DV joined #gluster
03:36 masterzen joined #gluster
03:38 hagarth joined #gluster
03:39 DarkBidou could it be that calls to methods like "file_exists()", even with APC, really perform a disk operation?
03:44 JordanHackworth joined #gluster
03:47 DarkBidou well, apc reduces it but does not remove all file system calls
03:48 DarkBidou maybe thats just how PHP is (compared to java, compiled, with tomcat)
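
(A sketch of the php.ini knobs in play in this thread; values are arbitrary. apc.stat=0 stops APC from stat()ing cached includes, but file_exists()/stat() calls in application code still hit the filesystem, which matches what DarkBidou observes; a larger realpath cache at least dedupes repeated path lookups. With apc.stat=0, php-fpm must be reloaded after each code deploy:)

    cat >> /etc/php.d/gluster-tuning.ini <<'EOF'
    apc.stat=0
    realpath_cache_size=256k
    realpath_cache_ttl=300
    EOF
    service php-fpm reload
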
03:53 johnnytran joined #gluster
03:54 DarkBidou i gotta go, thanks everyone
03:56 hagarth joined #gluster
04:00 masterzen joined #gluster
04:04 suman_d joined #gluster
04:08 chirino joined #gluster
04:15 masterzen joined #gluster
04:24 kshlm joined #gluster
04:27 siel joined #gluster
04:37 plarsen joined #gluster
04:40 DV joined #gluster
04:42 gem joined #gluster
04:45 vikumar joined #gluster
04:53 masterzen joined #gluster
05:03 sage_ joined #gluster
05:08 smohan joined #gluster
05:10 siel joined #gluster
05:17 fubada joined #gluster
05:18 fubada joined #gluster
05:22 zerick joined #gluster
05:35 masterzen joined #gluster
05:37 Bosse joined #gluster
05:48 ramteid joined #gluster
05:53 lanning joined #gluster
05:59 anoopcs joined #gluster
06:00 nrcpts joined #gluster
06:03 hagarth joined #gluster
06:06 nshaikh joined #gluster
06:14 nrcpts joined #gluster
06:20 misch joined #gluster
06:25 nbalacha joined #gluster
06:29 SOLDIERz joined #gluster
06:34 aravindavk joined #gluster
06:35 ghenry joined #gluster
06:59 misch joined #gluster
07:00 nbalacha joined #gluster
07:05 ricky-ticky1 joined #gluster
07:08 JordanHackworth joined #gluster
07:09 masterzen joined #gluster
07:11 jtux joined #gluster
07:13 jkroon joined #gluster
07:13 nangthang joined #gluster
07:19 sage_ joined #gluster
07:26 JordanHackworth joined #gluster
07:31 ctria joined #gluster
07:36 rafi1 joined #gluster
07:37 soumya_ joined #gluster
07:44 Philambdo joined #gluster
08:00 fandi joined #gluster
08:05 deniszh joined #gluster
08:26 DV joined #gluster
08:29 fsimonce joined #gluster
08:30 jamesc joined #gluster
08:32 crashmag joined #gluster
08:37 lanning joined #gluster
08:37 jbrooks joined #gluster
08:45 ronis joined #gluster
08:50 DV joined #gluster
08:58 glusterbot News from newglusterbugs: [Bug 1181977] gluster vol clear-locks vol-name path kind all inode return IO error in a disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1181977>
08:59 SGTItlog joined #gluster
09:02 jaank joined #gluster
09:04 ronis joined #gluster
09:10 johnnytran joined #gluster
09:10 tryggvil joined #gluster
09:11 Slashman joined #gluster
09:12 mbukatov joined #gluster
09:18 JordanHackworth joined #gluster
09:31 ricky-ticky joined #gluster
09:39 T0aD joined #gluster
09:44 johnnytran joined #gluster
09:46 rjoseph joined #gluster
09:57 eljrax joined #gluster
09:58 glusterbot News from newglusterbugs: [Bug 764977] [FEAT] perform a checksum calculation on files before performing any internal deletes <https://bugzilla.redhat.com/show_bug.cgi?id=764977>
09:58 glusterbot News from newglusterbugs: [Bug 764705] [FEAT] Type Ahead <https://bugzilla.redhat.com/show_bug.cgi?id=764705>
09:59 DV joined #gluster
10:07 misch_ joined #gluster
10:08 owlbot joined #gluster
10:38 badone joined #gluster
10:45 harish joined #gluster
10:54 elico joined #gluster
10:55 jkroon [2015-01-14 10:53:22.375672] I [afr-self-heal-entry.c:1837:afr_sh_entry_common_lookup_done] 0-www_shared-replicate-0: /themes/third_party/foo: Skipping entry self-heal because of gfid absence
10:55 jkroon [2015-01-14 10:53:22.381733] E [afr-self-heal-common.c:2212:afr_self_heal_completion_cbk] 0-www_shared-replicate-1: background  entry self-heal failed on /themes/third_party
10:55 jkroon any ideas on how I can go about trying to figure that one out?
10:56 jkroon essentially we can't get a directory listing on the folder themes/third_party, but everything else seems to be working.
10:58 Bosse joined #gluster
11:00 jkroon ./www_shared_b/themes/third_party/foo
11:00 jkroon ./www_shared_b/.glusterfs/00/00/00000000-0000-0000-0000-000000000000
11:00 jkroon that gfid link seems to be wrong ?!
11:01 keds joined #gluster
11:08 jiku joined #gluster
11:15 hchiramm joined #gluster
11:19 Dw_Sn joined #gluster
11:19 jkroon ok, so somehow the system generated a gfid value of 0, and I'm guessing that's the sentinel value.  Removing the file from the bricks resolved the issue, and then subsequently since the .glusterfs/00/00/00000000-0000-0000-0000-000000000000 had a link count of 1 I could nuke that too.
11:19 jkroon now everything is fine again.
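
(The state jkroon describes can be inspected straight on a brick; a sketch reusing the brick paths from the log as examples. A healthy file carries a non-zero trusted.gfid, and its .glusterfs entry is a hard link sharing the file's inode:)

    # gfid as stored on the brick; all zeroes is the broken sentinel above
    getfattr -n trusted.gfid -e hex ./www_shared_b/themes/third_party/foo

    # hard-link count of the matching .glusterfs entry (1 means orphaned)
    stat -c '%h %n' ./www_shared_b/.glusterfs/00/00/00000000-0000-0000-0000-000000000000
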
11:20 Dw_Sn any idea how I can get some statistics about the vol ?
11:20 jkroon depends, what stats are you looking for?
11:21 JordanHackworth joined #gluster
11:21 Dw_Sn like some iostats
11:23 partner Dw_Sn: maybe this? https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_monitoring_workload.md
11:31 Dw_Sn partner: wonderful , thank you
11:32 Dw_Sn Duration: 1891968 seconds
11:33 Dw_Sn !?
11:35 Dw_Sn is this since the profile starts ?
11:36 aravindavk joined #gluster
11:38 partner volume uptime
11:39 partner sorry, meant *brick* uptime
11:40 partner the duration is presented for each brick individually and if you for example reboot one box those bricks will have very low "duration"
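
(The commands behind these numbers, from the monitoring guide partner linked; the volume name is hypothetical:)

    gluster volume profile myvol start
    gluster volume profile myvol info     # per-brick latency and fop counts
    gluster volume top myvol read-perf    # hottest files by read throughput
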
12:00 jdarcy joined #gluster
12:06 hchiramm joined #gluster
12:07 jdarcy_ joined #gluster
12:17 Slashman_ joined #gluster
12:17 athinkingmeat joined #gluster
12:19 social joined #gluster
12:28 tryggvil joined #gluster
12:32 LebedevRI joined #gluster
12:39 TvL2386 joined #gluster
12:49 soumya joined #gluster
12:50 Azaril joined #gluster
12:50 Azaril hi
12:50 glusterbot Azaril: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:50 Azaril my data doesnt seem to be replicating
12:51 Azaril ive followed these basic steps
12:51 Azaril http://www.jamescoyle.net/how-to/435-setup-glusterfs-with-a-replicated-volume-over-2-nodes
12:51 Azaril but ive generated files on one server and they havent appeared on the other
13:08 partner Azaril: are you writing straight to the bricks?
13:08 Dw_Sn joined #gluster
13:09 partner you will need to do all the activity via client mount, see the last link on that page you linked - To see how to use your volume, see our guide to mounting a volume.
13:09 partner that being: http://www.jamescoyle.net/how-to/439-mount-a-glusterfs-volume
13:10 Azaril ok
13:10 Azaril ive mounted it now
13:10 Azaril it doesnt seem to be replicating though
13:11 Azaril should the mount be the same as the brick location?
13:11 partner if you wrote straight to the bricks they will never get replicated
13:11 Azaril erm
13:11 Azaril do i need to wipe it and start again then?
13:12 Azaril ive mounted it to a new place now, but i had previously been writing test files to the bricks
13:12 partner well remove the test files and the bricks should be ok
13:13 partner then go and write the test files via the glusterfs mount and you should see them appearing on all the bricks of the replica
13:13 Azaril hmmm doesnt seem to be
13:13 Azaril oh
13:14 Azaril no, doesnt seem to be...
13:14 Azaril ill start again
13:15 partner well, redoing it all does not take that long, its good practice :)
13:18 hagarth joined #gluster
13:21 Azaril ok
13:21 Azaril so i have to write to the client mount but read from the brick?
13:22 Gill joined #gluster
13:22 Azaril ive written a file to client mount1, its appeared in brick 1 and brick 2 but not in client mount2
13:24 partner no
13:24 partner forget the bricks
13:24 partner you basically never ever touch them or access them from this moment forward
13:24 Azaril hmm
13:24 partner they are part of the volume, you access the volume by mounting it as described at the link i gave
13:24 Azaril it seems one of the clients isnt working properly
13:26 Azaril can you only have one client?
13:26 partner many
13:27 Azaril hum
13:27 Azaril well this isnt working properly then
13:27 partner i have no idea what you have done so a bit hard to help here
13:27 partner i'm sure its something simple as you already have one functional client there, right?
13:27 Azaril ah
13:28 Azaril there it goes
13:28 Azaril think ive got it
13:28 partner great
13:28 Azaril boom
13:28 Azaril cheers dude
13:28 Azaril thanks very much
13:30 partner np
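
(The whole exchange condensed into a smoke test; a sketch for a replica-2 volume with hypothetical names. Writes go through the client mount only; the brick directories are merely where you verify the replica landed:)

    mount -t glusterfs node1:/testvol /mnt/testvol
    echo probe > /mnt/testvol/probe.txt
    ls /data/brick1/testvol/   # on node1: probe.txt appears here
    ls /data/brick1/testvol/   # on node2: and here too, via replication
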
13:37 _Bryan_ joined #gluster
13:46 Dw_Sn joined #gluster
13:47 nshaikh joined #gluster
13:47 jmarley joined #gluster
13:51 plarsen joined #gluster
13:57 dgandhi joined #gluster
13:59 kkeithley partner++
13:59 glusterbot kkeithley: partner's karma is now 8
14:01 Durzo anyone got any ideas about what is going on with bug 1181870 ?
14:01 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1181870 high, unspecified, ---, bugs, NEW , Geo-replication fails with OSError: [Errno 16] Device or resource busy
14:02 harish joined #gluster
14:02 julim joined #gluster
14:02 rgustafs joined #gluster
14:06 calisto joined #gluster
14:11 fandi joined #gluster
14:12 mator Durzo, reboot and try again ?
14:13 bennyturns joined #gluster
14:16 jkroon hi, when starting glusterd - is there any way to query whether glusterd has fully started up?
14:16 jkroon ie, that it's ready to start accepting mounts?
14:17 jdarcy joined #gluster
14:17 jkroon dumb question - guessing that if gluster volume list works all should be ok ...
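
(jkroon's guess as a startup gate; a sketch that polls the CLI until glusterd answers before attempting any mounts. The 30-second cap and names are arbitrary:)

    for i in $(seq 1 30); do
        gluster volume list >/dev/null 2>&1 && break
        sleep 1
    done
    mount -t glusterfs localhost:/myvol /mnt/myvol
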
14:20 calisto joined #gluster
14:21 diegows joined #gluster
14:23 aravindavk joined #gluster
14:23 Gill joined #gluster
14:24 smohan joined #gluster
14:29 calisto joined #gluster
14:29 glusterbot News from newglusterbugs: [Bug 1182145] mount.glusterfs doesn't support mount --verbose <https://bugzilla.redhat.com/show_bug.cgi?id=1182145>
14:29 ckotil joined #gluster
14:35 DV joined #gluster
14:40 scuttle|afk joined #gluster
14:41 calisto joined #gluster
14:51 nishanth joined #gluster
14:53 TvL2386 joined #gluster
14:53 doekia joined #gluster
15:01 tdasilva joined #gluster
15:02 coredump joined #gluster
15:07 wushudoin joined #gluster
15:08 DV joined #gluster
15:17 squizzi joined #gluster
15:19 virusuy joined #gluster
15:19 virusuy joined #gluster
15:20 theron joined #gluster
15:23 dgandhi greetings, is there some way to look up the mapping interval of a directory ?
15:32 dbruhn joined #gluster
15:42 jdarcy dgandhi: What do you mean "mapping interval"?
15:45 dgandhi jdarcy: I have an issue where many identically named files in similarly named dirs are ending up on the same brick, I found some email chains that suggest that files are placed based on basename and something called "mapping interval" - which would be an important variable to figure out my issue.
15:45 dbruhn Hey guys, been a while. Have a question on the new AFR in 3.6. Does the new version of AFR solve a lot of the split-brain issues that seemed to crop up back in 3.2/3.3?
15:48 jdarcy dgandhi: Not sure about the terminology, but you should be able to get the layout for a directory by fetching the (fake) xattr trusted.glusterfs.pathinfo on it.
15:49 jdarcy dgandhi: The layout is what gets combined with the hash of a file's basename to determine where it should go.
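
(jdarcy's pointer in command form, with hypothetical paths. The pathinfo xattr is queried on the client mount; the hash ranges making up the layout live in trusted.glusterfs.dht on each brick's copy of the directory:)

    getfattr -n trusted.glusterfs.pathinfo -e text /mnt/myvol/some/dir
    getfattr -n trusted.glusterfs.dht -e hex /data/brick1/some/dir
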
15:50 jdarcy dbruhn: Newer versions of AFR have certainly fixed a lot of split-brain issues, some by enforcing quorum and some by other means.  I'm just as sure others remain.
15:52 squizzi joined #gluster
15:54 kshlm joined #gluster
15:54 kshlm joined #gluster
15:56 dbruhn jdarcy, good to know, with quorum do you need to run replica 3 or better?
15:59 _Bryan_ joined #gluster
16:00 jdarcy dbruhn: You can run it with two-way replication, but quorum for N=2 is a bit funky.
16:01 dbruhn I am thinking about running brick servers with replica 2. Last I remember replica 2 quorum is limited to just stopping a volume right?
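
(The settings behind this thread; volume name hypothetical. cluster.quorum-type auto is the client-side quorum whose N=2 behaviour jdarcy calls funky: the side holding the first brick stays writable while the other goes read-only. Server-side quorum, as dbruhn recalls, acts by stopping bricks:)

    gluster volume set myvol cluster.quorum-type auto
    gluster volume set myvol cluster.server-quorum-type server
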
16:01 deepakcs joined #gluster
16:02 sauce joined #gluster
16:05 jobewan joined #gluster
16:06 hchiramm joined #gluster
16:09 roost joined #gluster
16:17 rwheeler joined #gluster
16:20 Azaril can you do minor version updates on gluster without stopping anything?
16:20 Azaril ie if i just install a new debian will it break anything?
16:22 dbruhn_ joined #gluster
16:25 lmickh joined #gluster
16:29 suman_d joined #gluster
16:30 hchiramm joined #gluster
16:38 coredump joined #gluster
16:39 hagarth joined #gluster
17:00 doekia joined #gluster
17:00 Azaril if i have 3 replicas, what is the default behaviour if one server goes down?
17:03 fubada hi purpleidea do you have a new branch of your gluster for me to try :)
17:07 PeterA joined #gluster
17:10 tryggvil joined #gluster
17:10 tom[] is glusterbot a willie?
17:11 bala joined #gluster
17:17 calisto joined #gluster
17:24 ronis joined #gluster
17:30 glusterbot News from resolvedglusterbugs: [Bug 1138897] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=1138897>
17:30 dgandhi How do I check the xattrs for a gluster volume (trusted.glusterfs.pathinfo) at the fuse mount, or on the brick FS? I can't find any xattrs in either on a volume I'm trying to troubleshoot.
17:37 diegows joined #gluster
17:41 ekuric joined #gluster
17:45 semiosis dgandhi: ,,(pathinfo)
17:45 glusterbot dgandhi: find out which brick holds a file with this command on the client mount point: getfattr -d -e text -n trusted.glusterfs.pathinfo /client/mount/path/to.file
17:46 semiosis @version
17:46 glusterbot semiosis: The current (running) version of this Supybot is 0.83.4.1+limnoria 2014.11.24, running on Python 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)  [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)].  The newest versions available online are 2015.01.11 (in testing), 2014.12.22 (in master).
17:46 semiosis tom[]: ^^
18:19 DV joined #gluster
18:20 uebera|| joined #gluster
18:24 tom[] semiosis: tnx
18:24 semiosis yw
18:30 glusterbot News from newglusterbugs: [Bug 1182267] compile warnings with gcc 5.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1182267>
18:30 tryggvil joined #gluster
18:35 keds joined #gluster
18:37 CyrilPeponnet hey guys
18:37 CyrilPeponnet some help with geo-rep ?
18:38 CyrilPeponnet command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
18:38 CyrilPeponnet denied, but create push-pem force was successful
18:39 CyrilPeponnet master -> slave root passwordless is fine
18:43 rafi1 joined #gluster
18:44 CyrilPeponnet pub keys are not pushed to my authorized_keys
18:55 rolfb joined #gluster
19:01 CyrilPeponnet looks like peer_add_secret_pub is never executed on the slave
19:01 CyrilPeponnet anyone?
19:04 semiosis CyrilPeponnet: wish i could help but idk much about geo-rep
19:09 CyrilPeponnet appending the common_secret_pub.pem to authorized_keys does the trick but this should be done by push-pem...
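
(CyrilPeponnet's manual workaround spelled out; a sketch using the file name from the log and a hypothetical slave host. This is the step push-pem was expected to perform:)

    scp /var/lib/glusterd/geo-replication/common_secret_pub.pem root@slave:/tmp/
    ssh root@slave 'cat /tmp/common_secret_pub.pem >> /root/.ssh/authorized_keys'
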
19:11 calisto joined #gluster
19:13 jmarley joined #gluster
19:37 ira joined #gluster
19:37 javi404 joined #gluster
19:39 bene2 joined #gluster
19:41 Gill joined #gluster
19:44 JustinClift CyrilPeponnet: Definitely ask on the gluster mailing list.  It could be a bug, which will need to become known about and fixed by the geo-rep dev's
20:08 Gill joined #gluster
20:10 lmickh joined #gluster
20:14 deniszh joined #gluster
20:15 B21956 joined #gluster
20:31 Intensity joined #gluster
20:33 rcampbel3 joined #gluster
20:39 rafi1 joined #gluster
20:43 calisto joined #gluster
20:48 y4m4_ joined #gluster
20:53 B21956 joined #gluster
21:02 B21956 joined #gluster
21:12 plarsen joined #gluster
21:14 roost joined #gluster
21:15 rwheeler joined #gluster
21:17 misko_ [2015-01-14 21:10:14.558320] E [xlator.c:425:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again
21:17 misko_ is this a known issue in centos7?
21:21 neofob left #gluster
21:29 badone joined #gluster
21:30 JoeJulian misko_: where is that from?
21:31 misko_ /var/log/gluster/shared.log
21:31 misko_ glusterfs-api-3.6.1-1.el7.x86_64
21:31 misko_ glusterfs-3.6.1-1.el7.x86_64
21:31 misko_ glusterfs-fuse-3.6.1-1.el7.x86_64
21:31 misko_ glusterfs-server-3.6.1-1.el7.x86_64
21:31 misko_ glusterfs-libs-3.6.1-1.el7.x86_64
21:31 misko_ glusterfs-cli-3.6.1-1.el7.x86_64
21:31 JoeJulian gah
21:31 misko_ but i have to say I had to install with --nodeps, because -api depends on python 2.6 (i have 2.7)
21:32 misko_ i'm a bit confused
21:32 JoeJulian Sounds like a packaging bug.
21:32 JoeJulian Use fpaste.org and paste that client log from start to failure.
21:32 misko_ ok
21:32 misko_ w8
21:33 misko_ haha fpaste is down
21:33 misko_ http://pastebin.com/TjsgqKjA
21:33 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:33 misko_ glusterbot: fpaste is down
21:34 JoeJulian wow, that's pretty useless isn't it.
21:35 misko_ what doyou mean.
21:35 misko_ ?
21:35 JoeJulian I mean the log tells us nothing.
21:35 misko_ (sorry my spacebarisbroken)
21:35 misko_ Well it is
21:35 JoeJulian Try /usr/sbin/glusterfs --debug --volfile-server=xfc0 --volfile-id=/disks /shared/isos
21:35 misko_ [2015-01-14 21:35:47.813603] D [MSGID: 0] [glusterfsd.c:572:create_fuse_mount] 0-glusterfsd: failed to initialize fuse translator
21:36 JoeJulian Use pastebin for the whole log please.
21:36 misko_ ok
21:37 misko_ http://pastebin.com/mKuKL8Hb
21:37 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:37 JoeJulian Hush, glusterbot.
21:39 JoeJulian Aha
21:39 JoeJulian Add LC_NUMERIC = "en_US.UTF-8" to /etc/locale.conf
21:40 JoeJulian Looks like there's a floating point conversion error.
21:40 JoeJulian That will allow it to mount from fstab
21:41 misko_ ok do i have to restart something?
21:41 misko_ (because simply adding it did not work)
21:41 JoeJulian From the command line, you should just be able to: "export LC_NUMERIC=en_US.UTF-8" and try again.
21:42 misko_ same shit.
21:42 misko_ [root@xfc0 ~]# set|grep LC_N
21:42 misko_ LC_NUMERIC=en_US.UTF-8
21:42 JoeJulian http://www.gluster.org/pipermail/gluster-users/2014-December/019912.html
21:44 misko_ should i try putting to fstab?
21:45 misko_ because cmdline still does not work.
21:45 JoeJulian according to bug 1117591 they suggest LC_NUMERIC should be "C": env -i LC_NUMERIC=C /usr/sbin/glusterfs --volfile-server=xfc0 --volfile-id=/disks /shared/isos
21:45 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1117591 is not accessible.
21:47 misko_ okok with LC_ALL=en_US a different problem comes up
21:47 misko_ so thanks for now :)
21:52 misko_ f*king selinux
21:52 JoeJulian Heh
21:54 misko_ JoeJulian: i'm sorry but LC_NUMERIC is not enough
21:54 misko_ with LC_ALL=C it works
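
(What finally worked for misko_, assembled from the exchange above: a scrubbed environment forcing the C locale, since the fuse translator's option parsing in 3.6.1 trips over non-C numeric formats:)

    env -i LC_ALL=C /usr/sbin/glusterfs --volfile-server=xfc0 --volfile-id=/disks /shared/isos
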
21:56 misko_ ok i have 4 bricks, 296G each, replicated, and the mounted filesystem is 50G only.
21:58 JoeJulian I've never seen that.
21:58 JoeJulian Replica 2 should give 592G, replica 4 would be 296G.
21:59 JoeJulian So I would look at brick paths, and your mount point. See if you're reading the disk size of something else.
22:03 misko_ replica 4.
22:03 misko_ xfc0:/disks                50G  1.2G   49G   3% /shared/isos
22:04 misko_ this is the brick dir
22:04 misko_ /dev/mapper/centos-disks  296G   65M  281G   1% /gluster/1
22:04 misko_ (*4)
22:05 JoeJulian ~pasteinfo | misko_
22:05 glusterbot misko_: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
22:07 misko_ http://fpaste.org/169791/12732221/
22:13 calisto joined #gluster
22:14 JoeJulian I cannot find any logical reason for that to happen.
22:14 JoeJulian You didn't define quotas, did you?
22:15 misko_ no i did not
22:15 misko_ The only thing that comes to my mind is that whatever i play with, i find bugs instantly.
22:17 JoeJulian Yeah, but there's no bug that would turn 296G into 50G. There's a mathematical disconnect there.
22:21 misko_ i'll run dd overnight.
22:21 misko_ and see what happens
22:24 diegows joined #gluster
22:28 calisto joined #gluster
22:41 jobewan joined #gluster
22:54 Gill joined #gluster
22:54 Pupeno joined #gluster
22:54 Pupeno joined #gluster
23:07 Pupeno joined #gluster
23:07 Pupeno joined #gluster
23:16 sputnik13 joined #gluster
23:19 jmills joined #gluster
23:20 captainflannel testing gluster out for a new internal project, is there any benefit to using a dedicated gluster-client machine to host the volume over NFS?
23:29 gildub joined #gluster
23:36 Pupeno joined #gluster
23:47 JoeJulian captainflannel: No? I'm not entirely sure I understand what you're asking though.
23:53 captainflannel hello, so we have a need to have a very large data repository available via NFS and SMB. I'm thinking to use glusterfs. My question: would we get better performance using a dedicated "client" to mount the gluster volume and share it via smb/nfs, or using the gluster server to host the volume over SMB/NFS?
23:54 JoeJulian GlusterFS already provides nfs shares. That's the most efficient way for that.
23:55 JoeJulian As for samba, that depends on your hardware and load. Use the libgfapi vfs though.
23:57 captainflannel is libgfapi a built in option for samba?
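
(JoeJulian is referring to Samba's vfs_glusterfs module, packaged separately on most distros, e.g. samba-vfs-glusterfs on Fedora/EL. A hedged share sketch with a hypothetical volume name; path is relative to the volume root, and clients reach the volume over libgfapi with no fuse mount involved:)

    cat >> /etc/samba/smb.conf <<'EOF'
    [gvol]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = myvol
        glusterfs:volfile_server = localhost
        kernel share modes = no
        read only = no
    EOF
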
