
IRC log for #gluster, 2015-11-19


All times shown according to UTC.

Time Nick Message
00:03 JoeJulian kminooie: It got cut off at "trying to"
00:07 kminooie I am trying to mount gluster over nfs. I can do that on command line with no problem. but when I try to do the same thing with autofs, I get 'Remote I/O error' when I try to access the directory ( ls ), and I get this line in gluster nfs log file:  0-rpc-service: RPC program version not available (req 100003 2) .
00:07 kminooie Now, what I am trying to ask is whether it is possible for autofs to somehow downgrade the NFS version from 3 to 2? I do use the vers=3,proto=tcp options with both the command line and autofs.
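For context, an autofs configuration that forces NFSv3 over TCP for a Gluster NFS export would look roughly like the sketch below; the map file name, mount point, and volume name are placeholders, not taken from kminooie's setup. Gluster's built-in NFS server only speaks v3, so the client must not be allowed to negotiate another version.

    # /etc/auto.master
    /mnt/gluster  /etc/auto.gluster  --timeout=60

    # /etc/auto.gluster -- force NFSv3 over TCP for the Gluster NFS server
    myvolume  -fstype=nfs,vers=3,proto=tcp,nolock  server1:/myvolume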
00:08 zhangjn joined #gluster
00:09 zhangjn joined #gluster
00:10 zhangjn joined #gluster
00:12 kminooie JoeJulian: did it come through?
00:14 bennyturns joined #gluster
00:14 JoeJulian It did. And it doesn't seem likely. What seems more probable is that it's not picking up the mount options at all.
00:14 JoeJulian tbh, I haven't used automount for over 10 years so I'm probably not much help.
00:19 kminooie i can see the options ( vers=3,proto=tcp) in 'mount' output, so I don't think not getting picked up is the issue
00:23 hgichon joined #gluster
00:29 gildub_ joined #gluster
00:32 mlhamburg_ joined #gluster
00:59 shyam joined #gluster
01:07 zhangjn joined #gluster
01:08 zhangjn_ joined #gluster
01:09 bluenemo JoeJulian, just did the upgrade as you told me, worked perfectly!
01:11 bluenemo ah look at that. i upgraded, did the full self heal again, now the 180 files are gone :) ah what a nice day
01:11 bluenemo going to sleep now or sth. thanks a lot for your help today JoeJulian :)
01:11 JoeJulian excellent!
01:11 JoeJulian You're welcome.
01:11 bluenemo excellent indeed :)
01:11 bluenemo i'm especially happy about the way it failovered just without ANY trouble
01:11 bluenemo pkill and the webworkers didnt even notice
01:12 bluenemo thats what i'm talking about :) Very happy now :)
01:20 nangthang joined #gluster
01:24 bluenemo so cool, now i can shoot any one machine in my setup - production tested. this is so nifty. i'll see how this is going, then i'll switch my other HA customers to gluster i think
01:27 Lee1092 joined #gluster
01:30 JoeJulian It's always nice to add another tool to the ol' tool belt.
01:35 bluenemo especially if it provides redundancy :)
01:54 plarsen joined #gluster
02:01 F2Knight joined #gluster
02:11 nangthang joined #gluster
02:23 marbu joined #gluster
02:23 johnmark_ joined #gluster
02:23 Ludo__ joined #gluster
02:23 Champi_ joined #gluster
02:24 bhuddah joined #gluster
02:24 kbyrne joined #gluster
02:24 tg2 joined #gluster
02:24 atrius joined #gluster
02:24 F2Knight joined #gluster
02:24 coreping joined #gluster
02:24 plarsen joined #gluster
02:25 rwheeler joined #gluster
02:25 Sadama joined #gluster
02:25 sac joined #gluster
02:25 marbu joined #gluster
02:25 m0zes joined #gluster
02:25 ahino joined #gluster
02:26 dblack joined #gluster
02:26 cuqa_ joined #gluster
02:26 dusmant joined #gluster
02:29 haomaiwa_ joined #gluster
02:34 kminooie hey does anyone know what this line means in nfs.log  "0-netstorage-client-0: Using Program GlusterFS 3.3"  the version that I've installed is 3.6.6-1 ( from gluster repo on wheezy )
02:36 kminooie the entire line ( second line after starting the service )  Using Program GlusterFS 3.3, Num (1298437), Version (330). do I have the correct version?
02:39 haomaiwang joined #gluster
02:45 hagarth_ kminooie: 3.3 refers to the rpc version used. glusterfs --version is a better indicator of the version you are running.
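A quick way to confirm the installed version, as hagarth_ suggests (the output shown here is illustrative only, not from the log):

    $ glusterfs --version
    glusterfs 3.6.6 built on ...

    # on Debian wheezy the package version can also be checked with
    $ dpkg -l glusterfs-server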
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:47 DV joined #gluster
02:50 kminooie thanks hagarth_
02:50 kminooie left #gluster
02:57 shubhendu joined #gluster
03:01 harish__ joined #gluster
03:02 mlncn joined #gluster
03:06 jtux joined #gluster
03:17 bharata-rao joined #gluster
03:27 jordie joined #gluster
03:29 aravindavk joined #gluster
03:36 jordie joined #gluster
03:37 sesa joined #gluster
03:37 sesa_ joined #gluster
03:40 harish joined #gluster
03:40 ppai joined #gluster
03:44 nangthang joined #gluster
03:51 overclk joined #gluster
03:52 armyriad joined #gluster
03:57 rafi joined #gluster
03:59 rafi joined #gluster
04:06 rafi joined #gluster
04:08 rafi joined #gluster
04:11 atinm joined #gluster
04:12 dlambrig_ joined #gluster
04:13 rafi joined #gluster
04:13 kdhananjay joined #gluster
04:15 itisravi joined #gluster
04:16 dusmant joined #gluster
04:19 aravindavk joined #gluster
04:22 gem joined #gluster
04:23 TheSeven joined #gluster
04:25 mlncn joined #gluster
04:26 nishanth joined #gluster
04:33 rafi joined #gluster
04:33 sakshi joined #gluster
04:34 trapier joined #gluster
04:36 skoduri joined #gluster
04:36 kanagaraj joined #gluster
04:47 rafi joined #gluster
04:51 rafi joined #gluster
04:53 rafi joined #gluster
04:58 rafi joined #gluster
04:59 nbalacha joined #gluster
05:00 pppp joined #gluster
05:00 rafi joined #gluster
05:04 vimal joined #gluster
05:05 ramky joined #gluster
05:06 rafi joined #gluster
05:07 nbalacha joined #gluster
05:09 rafi joined #gluster
05:13 anil joined #gluster
05:17 rafi joined #gluster
05:25 neha_ joined #gluster
05:27 rjoseph joined #gluster
05:28 Bhaskarakiran joined #gluster
05:32 ndarshan joined #gluster
05:33 jiffin joined #gluster
05:40 hgowtham joined #gluster
05:40 Manikandan joined #gluster
05:43 rafi joined #gluster
05:48 F2Knight joined #gluster
06:00 hgowtham joined #gluster
06:01 vmallika joined #gluster
06:01 ashiq joined #gluster
06:12 atalur joined #gluster
06:28 kdhananjay joined #gluster
06:33 Saravana_ joined #gluster
06:36 sankarshan joined #gluster
06:36 Manikandan joined #gluster
06:37 atalur_ joined #gluster
06:37 spalai joined #gluster
06:56 PaulCuzner joined #gluster
06:58 PaulCuzner joined #gluster
06:58 mlhamburg1 joined #gluster
07:03 [Enrico] joined #gluster
07:07 mhulsman joined #gluster
07:07 jiffin joined #gluster
07:08 gem joined #gluster
07:09 EinstCrazy joined #gluster
07:16 Saravana_ joined #gluster
07:22 kshlm joined #gluster
07:31 hos7ein joined #gluster
07:37 Manikandan joined #gluster
07:44 RedW joined #gluster
07:56 zhangjn joined #gluster
07:59 zoldar joined #gluster
08:05 DV_ joined #gluster
08:07 mhulsman1 joined #gluster
08:10 overclk_ joined #gluster
08:15 jwd joined #gluster
08:22 jwaibel joined #gluster
08:22 kovshenin joined #gluster
08:25 nangthang joined #gluster
08:26 kovshenin joined #gluster
08:27 kovshenin joined #gluster
08:28 kovshenin joined #gluster
08:31 mhulsman joined #gluster
08:33 kovshenin joined #gluster
08:33 mhulsman1 joined #gluster
08:34 fsimonce joined #gluster
08:35 kanagaraj joined #gluster
08:35 shortdudey123_ joined #gluster
08:37 Merlin__ joined #gluster
08:42 nadley joined #gluster
08:42 nadley hi all
08:45 nadley small question: I have two gluster servers providing a volume with replica 2. I would like to replace the servers with new ones. How should I do that? For information, the IPs of the new servers will not be the same as those of the old ones
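nadley's question goes unanswered in the log; one commonly used approach, sketched here with placeholder host names, volume name, and brick paths (this is not advice given in the channel), is to probe the new servers into the pool and replace the bricks one at a time, letting self-heal repopulate each new brick before moving on:

    gluster peer probe newserver1
    gluster volume replace-brick myvol oldserver1:/bricks/b1 newserver1:/bricks/b1 commit force
    gluster volume heal myvol full     # repopulate the new brick
    gluster volume heal myvol info     # wait until nothing is pending
    # repeat for the second server, then remove the old peers
    gluster peer detach oldserver1
    gluster peer detach oldserver2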
08:51 Manikandan joined #gluster
08:53 deepakcs joined #gluster
08:55 PaulCuzner left #gluster
08:58 ctria joined #gluster
08:59 haomaiwa_ joined #gluster
08:59 Saravana_ joined #gluster
09:01 haomaiwa_ joined #gluster
09:03 LebedevRI joined #gluster
09:15 mhulsman joined #gluster
09:16 mhulsman1 joined #gluster
09:18 Merlin__ joined #gluster
09:18 Merlin__ joined #gluster
09:21 Saravana_ joined #gluster
09:25 lalatenduM joined #gluster
09:32 LebedevRI left #gluster
09:34 gem_ joined #gluster
09:37 Merlin__ joined #gluster
09:38 nathwill joined #gluster
09:38 arcolife joined #gluster
09:38 Merlin__ joined #gluster
09:39 ivan_rossi joined #gluster
09:52 bluenemo joined #gluster
09:54 dlambrig_ joined #gluster
09:54 bluenemo JoeJulian, ndevos got a BIG problem now, alpha doesn't show all the files, the omega server does, no split brain or other info, all files present on the clients that mount it
09:55 bluenemo If I put files on the master they don't get replicated
09:55 ndevos bluenemo: I understand you are now using the fuse mount?
09:56 bluenemo yes
09:56 ndevos bluenemo: there is not really a master/slave in that case, all bricks are pretty much equal
09:57 ndevos bluenemo: in the log on the client-side, you might see some details about disconnects or such
09:57 ndevos bluenemo: /var/log/glusterfs/path-to-mount-point.log would be the file you are looking for
09:59 bluenemo some folders are just not present on alpha
09:59 bluenemo when i touch a file on alpha, it's not on omega
09:59 bluenemo no
09:59 bluenemo just got alpha and omega, two gluster servers, and web1-4, native fuse mount
09:59 bluenemo any ideas?
09:59 bluenemo stopping gluster on master and restarting doesnt help
09:59 bluenemo what logfile can i look into?
09:59 bluenemo how is this possible? gluster peer and volume info doesnt show anything :(
09:59 bluenemo ah for god sake..
09:59 bluenemo netstat shows that they are connected to both sides
10:00 bluenemo pretty much only that on the client ndevos http://paste.debian.net/hidden/a22bdf84/
10:00 glusterbot Title: Debian Pastezone (at paste.debian.net)
10:01 ndevos bluenemo: did you try the suggestion from the log: Please run 'gluster volume status' on server to see if brick process is running.
10:01 haomaiwa_ joined #gluster
10:02 ndevos bluenemo: also, when you touch a file on alpha, do you touch is through a gluster mount point?
10:02 mhulsman joined #gluster
10:02 bluenemo yes,  all running
10:02 ndevos bluenemo: directly touching a file in the directory of the bricks prevents gluster from tracking the operation, so it can not replicate it
10:02 bluenemo status and peer look ok
10:02 bluenemo when i touch on omega, it shows up on clients, when i do sth on alpha nothing happens at all
10:03 bluenemo there are folders missing on alpha as well
10:03 bluenemo from the data present before on "both servers"
10:03 bluenemo i dont get how this is possible
10:03 bluenemo its present on the brick @ alpha, as in /srv/gfs_fin_web/brick
10:03 ndevos bluenemo: you can trigger a self-heal on the missing directory or file by executing: stat /path/to/mount/point/file
10:04 bluenemo no, i didnt touch it in the brick, but the native mount on alpha
10:04 mhulsman joined #gluster
10:04 bluenemo no such file or directory
10:04 ndevos bluenemo: after that stat call, you should see something in the log of that mountpoint
10:04 bluenemo they don't show up in ls
10:04 ndevos bluenemo: and "gluster volume heal info" shows something?
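ndevos's two suggestions, written out as a sketch; the volume name is taken from the brick path mentioned above and the path is a placeholder:

    # trigger a self-heal of a missing entry through the mount point
    stat /var/www/path/to/missing/dir

    # check for pending heals and split-brain entries
    gluster volume heal gfs_fin_web info
    gluster volume heal gfs_fin_web info split-brain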
10:05 bluenemo wtf
10:05 bluenemo it wasnt mounted
10:05 bluenemo and people put files in the normal dir
10:05 bluenemo wtf???
10:05 ndevos ah, that explains things :-/
10:05 bluenemo hmpf. thats also bad
10:05 Saravana_ joined #gluster
10:05 bluenemo people use the native gluster mount on alpha in /var/www for ftp uploading php code
10:06 bluenemo but this shouldnt have affected the clients
10:06 bluenemo hmmmmm
10:06 bluenemo thats a very unhappy behavior..
10:06 bluenemo i'm wondering if i mounted it again last night after i restarted it
10:08 pdrakeweb joined #gluster
10:10 bluenemo dude, that was just very shocking for me..
10:10 bluenemo thank you very much for your fast help ndevos
10:10 bluenemo thank god this wasnt a split brain.. ;)
10:11 ndevos bluenemo: I always like to use automounting, that should prevent most of the "mount failed after reboot" issues
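An automount entry for the fuse mount ndevos is referring to could look roughly like this (direct map; the map file name is an assumption, the volume and mount point are taken from later in the log):

    # /etc/auto.master
    /-  /etc/auto.gluster

    # /etc/auto.gluster -- mounted on first access, remounted if it goes away
    /var/www  -fstype=glusterfs,log-level=WARNING  alpha:/gfs_web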
10:12 RayTrace_ joined #gluster
10:14 bluenemo no i didnt reboot the machine, i just restarted gluster for a small upgrade last night. but the customer says that the files were present today and then they werent, so maybe the mount jumped away
10:14 Manikandan joined #gluster
10:15 ndevos bluenemo: oh, I dont know how the ubuntu updates are done, maybe it kills all the gluster processes (includes the fuse mounts)
10:16 bluenemo hm, last log line for the mount on alpha was http://paste.debian.net/333767/
10:16 glusterbot Title: debian Pastezone (at paste.debian.net)
10:16 bluenemo no, i killed it via pkill myself before the upgrade, JoeJulian suggested this
10:19 ndevos zoldar: file a bug :)
10:19 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
10:20 DV joined #gluster
10:21 bluenemo thats strange
10:21 bluenemo maybe I really just forgot to mount it again
10:21 bluenemo hm. might be possible..
10:21 bluenemo hm mounting with noatime..
10:21 bluenemo doesnt help now ;)
10:21 bkunal|training joined #gluster
10:23 vimal joined #gluster
10:23 bluenemo hm also got this in my syslog [glusterfsd.c:1875:parse_cmdline] 0-glusterfs: obsolete option '--volfile-max-fetch-attempts or fetch-attempts' was provided
10:23 glusterbot bluenemo: ''s karma is now -3
10:23 bluenemo ah yes glusterbot , thank you
10:24 ndevos that poor
10:24 ndevos '
10:25 gildub_ joined #gluster
10:25 ndevos glusterbot: karma kshlm
10:25 glusterbot ndevos: Karma for "kshlm" has been increased 2 times and decreased 0 times for a total karma of 2.
10:27 ndevos bluenemo: the cleanup_and_exit in logs shows that the process exits, mostly instructed to do so by killing it from somewhere else
10:28 mhulsman1 joined #gluster
10:28 bluenemo yes, i think that was me during the upgrade
10:29 bluenemo well i guess it was my fault, forgetting to remount it..
10:39 bluenemo joined #gluster
10:40 DRoBeR joined #gluster
11:01 glusterbot` joined #gluster
11:01 haomaiwa_ joined #gluster
11:03 RedW joined #gluster
11:14 Norky joined #gluster
11:15 mhulsman joined #gluster
11:21 glusterbot joined #gluster
11:22 jiffin1 joined #gluster
11:24 bluenemo just noticed that i have nfs-common installed on the omega and alpha gluster servers.. I dont need that, do I?
11:27 Saravana_ joined #gluster
11:30 EinstCrazy joined #gluster
11:33 zhangjn joined #gluster
11:33 rjoseph joined #gluster
11:33 vmallika joined #gluster
11:37 Merlin__ joined #gluster
11:38 ira joined #gluster
11:41 jiffin joined #gluster
11:53 vmallika joined #gluster
11:55 gildub joined #gluster
12:01 haomaiwa_ joined #gluster
12:08 rafi1 joined #gluster
12:08 mlncn joined #gluster
12:12 Kenneth joined #gluster
12:13 julim joined #gluster
12:13 nishanth joined #gluster
12:18 ndarshan joined #gluster
12:21 neha_ joined #gluster
12:28 Merlin__ joined #gluster
12:29 Merlin___ joined #gluster
12:30 kkeithley1 joined #gluster
12:32 Mr_Psmith joined #gluster
12:39 mhulsman1 joined #gluster
12:44 kdhananjay1 joined #gluster
12:48 arcolife joined #gluster
12:50 DV_ joined #gluster
12:52 ctria joined #gluster
12:58 DV joined #gluster
12:58 chirino joined #gluster
12:59 Apeksha joined #gluster
13:01 haomaiwa_ joined #gluster
13:08 overclk joined #gluster
13:15 arcolife joined #gluster
13:15 kdhananjay joined #gluster
13:20 ppai joined #gluster
13:21 nbalacha joined #gluster
13:33 vimal joined #gluster
13:37 rafi joined #gluster
13:44 shyam joined #gluster
13:58 pppp joined #gluster
14:03 unclemarc joined #gluster
14:05 rjoseph joined #gluster
14:08 B21956 joined #gluster
14:08 overclk joined #gluster
14:15 haomaiwa_ joined #gluster
14:17 susant left #gluster
14:26 hagarth_ joined #gluster
14:31 Merlin__ joined #gluster
14:31 haomaiw__ joined #gluster
14:33 hamiller joined #gluster
14:39 chirino joined #gluster
14:40 jiffin joined #gluster
14:43 skylar joined #gluster
14:46 mhulsman joined #gluster
14:50 mhulsman1 joined #gluster
14:58 nishanth joined #gluster
15:01 haomaiwa_ joined #gluster
15:02 arcolife joined #gluster
15:03 shubhendu joined #gluster
15:03 dgandhi joined #gluster
15:03 Slashman joined #gluster
15:03 dgandhi joined #gluster
15:04 dgandhi joined #gluster
15:05 shyam joined #gluster
15:06 Chr1st1an joined #gluster
15:06 sghatty_ joined #gluster
15:08 scubacuda joined #gluster
15:08 kovshenin joined #gluster
15:09 trapier joined #gluster
15:10 skoduri joined #gluster
15:11 sakshi joined #gluster
15:13 plarsen joined #gluster
15:14 rjoseph joined #gluster
15:20 josh joined #gluster
15:25 lanning joined #gluster
15:30 bennyturns joined #gluster
15:32 harish joined #gluster
15:33 coredump joined #gluster
15:43 ayma joined #gluster
15:51 maserati joined #gluster
15:53 josh joined #gluster
16:04 cabillman joined #gluster
16:07 spalai joined #gluster
16:08 bennyturns joined #gluster
16:11 cholcombe joined #gluster
16:16 kdhananjay joined #gluster
16:16 kshlm joined #gluster
16:21 cfeller joined #gluster
16:23 gem_ joined #gluster
16:28 spalai joined #gluster
16:37 Merlin__ joined #gluster
16:38 trapier joined #gluster
16:41 Trefex joined #gluster
16:47 Kins joined #gluster
16:53 squizzi_ joined #gluster
16:54 rafi joined #gluster
16:55 dre_santos joined #gluster
16:55 dre_santos joined #gluster
16:57 skoduri joined #gluster
17:00 dre_santos Hi,
17:00 dre_santos I am needing some help here with Gluster.
17:00 dre_santos I have configured gluster with two nodes in replication.
17:00 dre_santos I am also accessing the gluster volume through those nodes.
17:00 dre_santos If both servers are up I have no problem at all.
17:00 dre_santos If I shut down server 2, and then perform a reboot on server 1, gluster will not start...
17:00 dre_santos In these conditions, when I perform a gluster volume status, all I get is N/A for all parameters
17:01 dblack joined #gluster
17:04 JoeJulian Bring back server 2. It won't start a replicated volume with less than 50%+1 of that replica volume. You might be able to override that (I haven't tried) with "gluster volume start $vol force"
17:04 JoeJulian dre_santos: ^
17:04 dre_santos ;)
17:05 dre_santos with force it starts
17:05 dre_santos [root@atp-odoo2 ~]# gluster volume start gv0 force
17:05 dre_santos volume start: gv0: success
17:05 dre_santos Didn't know it would not start with less than 50%+1 of nodes available
17:05 JoeJulian It's new in 3.7
17:06 dre_santos yes, my version is > 3.7 so it matches your info
17:06 dre_santos glusterfs-server-3.7.6-1.el6.x86_64
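The behaviour JoeJulian describes is tied to glusterd's server-quorum handling in 3.7; the relevant knobs and the override he mentions look roughly like this (a sketch, not something prescribed in the channel):

    # per-volume: participate in server-side quorum
    gluster volume set gv0 cluster.server-quorum-type server
    # cluster-wide: percentage of peers that must be up
    gluster volume set all cluster.server-quorum-ratio 51

    # force-start a volume that is below quorum (as dre_santos did above)
    gluster volume start gv0 force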
17:08 bluenemo joined #gluster
17:10 bluenemo hi JoeJulian, ndevos, I've just had a few seconds outage on two of my four webworkers, http://paste.debian.net/hidden/47478ec6/ this sounds a bit like a network problem with amazon to me. what do you think?
17:10 glusterbot Title: Debian Pastezone (at paste.debian.net)
17:10 JoeJulian It looks that way to me, too. Amazon has some pretty severe monitoring for network stability though, so I would be surprised.
17:11 bluenemo no, got those messages on all workers
17:11 bluenemo the monitoring system only reported two of the four, but i guess all were affected
17:11 JoeJulian If there's nothing in the brick logs that make it look like they stopped, then yes, I would blame network and open an outage ticket with them.
17:13 JoeJulian Get your nickel credit. :D
17:14 JoeJulian Oh, damn... I just did it again and used a non-international reference. A nickel is a US$0.05 coin.
17:14 bluenemo i dont get the reference :) Send you two pastes via PM
17:15 bluenemo nothing to heal in the heal info output either
17:16 bluenemo guess it was the network
17:16 bluenemo but very strange.. as both are in different availability zones (master & failover gfs server)
17:16 bluenemo and the workers are as well
17:18 bluenemo hm. monitoring says even ssh to the master and other nodes from the monitoring system failed
17:18 bluenemo i guess this sounds hardcore like network failure
17:18 JoeJulian Sure does
17:18 OregonGunslinger joined #gluster
17:19 bluenemo hm strange thing
17:19 bluenemo i like how gluster is still running though :)
17:21 bluenemo hm
17:21 bluenemo nothing in the aws report thingy here http://status.aws.amazon.com/
17:21 glusterbot Title: AWS Service Health Dashboard - Nov 19, 2015 PST (at status.aws.amazon.com)
17:21 shaunm joined #gluster
17:22 bluenemo btw thank you so much for responding so fast these days - it really means a lot
17:22 bluenemo you are so up for a couple of beers should you ever visit berlin :)
17:22 JoeJulian Happy to help
17:23 JoeJulian I have a link on my blog for donations to open source cancer research. I'd rather you donate to them.
17:25 bluenemo should the time the mount got back be in the clients mount log as well?
17:25 JoeJulian Probably. Within 3 seconds.
17:25 bluenemo ah ok. is there a way to donate to gluster developers?
17:26 bluenemo the last message i got is about when it came back again, but its a The message "W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-gfs_fin_web-client-0: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [Transport endpoint is not connected]" repeated 20 times between [2015-11-19 17:00:27.389734] and [2015-11-19 17:00:27.435793]
17:26 JoeJulian Most of them are employed by Red Hat so their donation scheme is buying RHEL.
17:26 bluenemo ah ok
17:26 bluenemo i'll check out the link on your website and send them some money!
17:26 JoeJulian But if you wanted to donate to cancer research on their behalf, I doubt any of them would be sad about it.
17:29 bluenemo hm. still reading logs. looks like the network went down on all 6 nodes in both availability zones at the same time. wtf?
17:29 bluenemo for about 45 seconds or sth
17:30 bluenemo yeah i will do that! :)
17:30 bluenemo its a great idea
17:31 mhulsman joined #gluster
17:33 bluenemo hm. this is so strange :D
17:34 bluenemo what else could cause such an outage JoeJulian? Do you have any gut feeling other than network?
17:35 bluenemo what would happen if the link between AZ A and B would go down for aws? so as in web worker 0 and 1 could still talk to gfs alpha in AZ A, and 10 and 11 to gfs omega, in AZ B?
17:35 bluenemo As far as I get it, a split brain would build up, right?
17:35 bluenemo but they shouldnt disconnect?
17:36 bluenemo split brain as in only when two files with same path are created on alpha and omega
17:36 monotek1 joined #gluster
17:36 bluenemo otherwise healing should fix that on next heal run or next access as far as i remember
17:36 JoeJulian Or are edited on both.
17:37 JoeJulian You can avoid that with quorum and an arbiter
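For reference, the quorum-plus-arbiter setup JoeJulian mentions is created in 3.7 roughly as follows; the host names, volume name, and brick paths are placeholders, not this deployment's:

    # replica 3 where the third brick stores metadata only (arbiter)
    gluster volume create myvol replica 3 arbiter 1 \
        server1:/bricks/b1 server2:/bricks/b1 arbiterhost:/bricks/b1

    # require a majority of replicas before clients may write
    gluster volume set myvol cluster.quorum-type auto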
17:38 bluenemo there wasn't a single mail sent during this outage time, which is strange (on all 4 workers)
17:38 bluenemo quorum via including the workers for making decisions?
17:39 bluenemo what do you mean exactly with edited on both?
17:40 JoeJulian foo.txt is changed on both A and B during a netsplit event. When the network reconnects, they'll both have pending changes for each other but no way to resolve which edits go where.
17:40 bluenemo yeah, so its a split brain and it should list it in info split-brian, correct?
17:40 JoeJulian yes
17:40 bluenemo but they wont do anything, right? btw will they deliver the files for read?
17:40 bluenemo what happens if somebody writes to the file?
17:41 bluenemo they = gfs servers   deliver to = clients
17:43 bluenemo I could prevent this by scripting sth that stops gluster on omega when the connection to the relevant ports to alpha fail - then worst case, web10 and 11 would go down (the ones in omegas AZs - go down as in fly out of the load balancer, as they dont serve the test file anymore) and only web0 and 1 would remain to talk to alpha
17:44 bluenemo do you think that would be a good idea? or would you rather go with a cluster software stack to build a quorum?
17:44 ivan_rossi left #gluster
17:44 bluenemo i've done postgres replication with corosync/pacemaker and can't say i liked it. what i do like is doing such stuff with saltstack, but i don't run this in production yet
17:44 mlhamburg1 joined #gluster
17:45 bluenemo ah i can see the outage in aws monitoring - 4 minutes
17:45 bluenemo hm
17:45 bluenemo thats a long time :/
17:45 bluenemo hm no the values are normalized
17:46 bluenemo lol. it shows me that at 4.59 there were 0.2 hosts online :)
17:46 bluenemo hm so it must have taken around 3 minutes. still a lot..
17:47 bluenemo monitoring only detected one minute
17:48 kovshenin joined #gluster
17:49 chirino joined #gluster
17:52 kotreshhr joined #gluster
17:56 kotreshhr1 joined #gluster
17:56 kotreshhr1 left #gluster
17:56 mhulsman joined #gluster
18:00 kovshenin joined #gluster
18:01 bluenemo JoeJulian, I just found a couple of split-brain warnings in the client-mount logs: http://paste.debian.net/hidden/dfad15fc/ info split-brain shows zero entries
18:01 glusterbot Title: Debian Pastezone (at paste.debian.net)
18:05 mhulsman joined #gluster
18:08 dlambrig_ joined #gluster
18:13 mlncn joined #gluster
18:13 bluenemo looked a bit more into the monitoring. shows that all 4 workers had high cpu utilization at the timeout. still no news on aws about network outage..
18:13 kovshenin joined #gluster
18:17 Rapture joined #gluster
18:18 scubacuda joined #gluster
18:18 EinstCrazy joined #gluster
18:24 dre_santos left #gluster
18:31 Chr1st1an joined #gluster
18:33 sghatty_ joined #gluster
18:39 nage joined #gluster
18:40 daMaestro joined #gluster
18:52 dgandhi joined #gluster
18:55 bluenemo JoeJulian, I just found some disturbing behavior.. when I echo fo > /var/www/testfile on web0 and then look on web11, it takes seconds before cat stops returning no such file or directory
18:55 bluenemo same goes for removing files.. they are sometimes still present for some time
18:59 bluenemo maybe ndevos is around today? :)
19:00 Philambdo joined #gluster
19:06 bluenemo ah i think use-readdirp=no is missing
19:06 hagarth_ bluenemo: also try mounting with --entry-timeout=0 and --attr-timeout=0
19:07 Merlin__ joined #gluster
19:08 bluenemo hagarth_, thats my current fstab line for the clients: omega:/gfs_web /var/www glusterfs defaults,fetch-attempts=10,nobootwait,use-readdirp=no,log-level=WARNING,log-file=/var/log/gluster-mount.log 0 0
19:08 hagarth_ bluenemo: add those two without the hyphens
19:10 skylar1 joined #gluster
19:11 bluenemo hagarth_, what exactly do those do?
19:11 bluenemo the manual only shows one sentence :(
19:11 bluenemo do you still recommend use-readdirp=no?
19:12 bluenemo what about direct-io-mode=? its also listed in http://www.gluster.org/community/documentation/index.php/Setting_Up_Clients
19:12 hagarth_ bluenemo: entry-timeout=0 instructs the fuse kernel module to not cache any entries, attribute-timeout=0 instructs it to not cache attributes.
19:13 bluenemo so for every read it will ask the gluster server?
19:13 hagarth_ bluenemo: use-readdirp=no can also help for more consistency.
19:13 bluenemo otherwise it will cache reads for one second right?
19:13 hagarth_ bluenemo: that is right, it is a performance v/s consistency tradeoff.
19:13 PaulCuzner joined #gluster
19:13 bluenemo hm
19:14 bluenemo i'm running php..
19:14 hagarth_ bluenemo: perhaps you could leave attribute-timeout as is
19:14 hagarth_ entry-timeout should matter for the problem you mentioned
19:15 bluenemo i dont get readdirp completely .. google is confusing. do you know what it does?
19:15 bluenemo hagarth_, i guess so, yes. also it was about 5 or more seconds that the file wasnt deleted on the other clients
19:16 hagarth_ readdirp fetches attributes and extended attributes in addition to entries
19:16 bluenemo so propagating deletion also seems to lag a bit
19:16 bluenemo attributes as in rwx? or as in immutable?
19:17 hagarth_ bluenemo: rwx, information that you see in the o/p of stat etc.
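Putting hagarth_'s suggestions together, the client fstab line quoted earlier would become something like this (only entry-timeout, attribute-timeout, and use-readdirp are the consistency knobs discussed here; everything else is carried over unchanged):

    omega:/gfs_web  /var/www  glusterfs  defaults,fetch-attempts=10,nobootwait,use-readdirp=no,entry-timeout=0,attribute-timeout=0,log-level=WARNING,log-file=/var/log/gluster-mount.log  0 0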
19:18 bluenemo uh. i need that :D direct-io-mode also sounds interesting
19:18 rjoseph joined #gluster
19:21 hagarth_ bluenemo: is there a php benchmarking tool that one can use?
19:21 coredump joined #gluster
19:27 bluenemo hagarth_, i got something similar. hm. to me it seems it is caching longer than one second
19:27 hagarth_ bluenemo: that seems odd.. are all your servers & clients in sync with respect to time?
19:28 bluenemo yes, just checked via salt - lets you run commands on all of them at the same time. to the second
19:32 bluenemo hm no, doesnt work :(
19:32 bluenemo basically same as before
19:32 bluenemo hmpf..
19:34 bluenemo hagarth_, do you think i should just try directio? what was the gluster command to show all currently set options again, including defaults?
19:35 bluenemo if it really were one second.. feels like 3 or something and for every access to the file via rm, ls or touch, the response is then cached again for 3 sec or sth
19:35 hagarth_ bluenemo: you could try that .. mount options do not get listed in gluster volume get or volume set help
19:36 bluenemo ah yes, thanks
19:37 bluenemo hm. i'm quite sure i have to tweak sth there. hope JoeJulian has a second :)
19:39 bluenemo hagarth_, what puzzles me is that `mount` doesn't show more than alpha:/gfs_web on /var/www type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072) - is there another command to show the fuse options as well?
19:39 bluenemo for example the nobootwait is missing
19:40 JoeJulian nobootwait isn't a mount option. It's an init script option.
19:41 bluenemo ah ok, makes sense. do you want me to cut the text above short?
19:41 rafi joined #gluster
19:46 bluenemo JoeJulian, do you have an idea on the too-long cache problem? I think I could live with a second but i just experienced 10 in a test and that's too much :(
19:47 bluenemo same behavior doesnt apply between the servers however
19:47 JoeJulian I've got a meeting coming up shortly that I've got to figure out how to attend (Lync? Seriously?) so I'm a bit unavailable for a while.
19:47 ir8 left #gluster
19:48 bluenemo can you give me a short hint on this one?
19:49 bluenemo do you think i should enable direct-io?
19:49 wushudoin joined #gluster
19:50 JoeJulian I didn't scroll back and read what the problem is. Typically I just suggest trying things and letting me know if they work.
19:51 bluenemo JoeJulian, when i touch, rm or ls a file on the clients, it sometimes takes up to 10 seconds to be propagated
19:51 brake2late joined #gluster
19:52 bluenemo it feels to me that the response is somehow cached as well. also sync is faster between the nodes having two connections to a server than to the "failover" one
19:53 virusuy Hi guys, im planning an upgrade from 3.6.4 to 3.7.1, anything i should know before the upgrade ?
19:56 brake2late Hi all, first time to gluster on irc. Question: is there a log for the healing process? I got a bunch of entries from the volume heal info output. I want to find out if Gluster is actually doing any healing. Thanks in advance.
19:57 bluenemo JoeJulian, I tried  entry-timeout=0,attribute-timeout=0,direct-io-mode=disable  in fstab for the clients without effect for trying to completely disable caching
19:57 bluenemo (after editing mount -a)
19:59 virusuy brake2late: yes, it should be located in /var/log/glusterfs and it's called glustershd.log
20:00 brake2late thanks virusuy
20:00 virusuy brake2late:  you're welcome :)
20:03 virusuy Im planning an upgrade from 3.6.4 to 3.7.1, anything i should know before the upgrade ?
20:04 bluenemo when accessed by watch -n 1 cat /var/www/testfile, it just took 25 seconds for a change to a file to update to another node
20:05 Merlin__ joined #gluster
20:06 bluenemo takes about the same time when I edit it on the server
20:06 kovshenin joined #gluster
20:07 bluenemo so I guess the goal is to completely disable client side caching.. or at least put it down to a second or so
20:09 mhulsman joined #gluster
20:09 bluenemo same doesnt apply to server - server
20:12 bluenemo JoeJulian, if I cant fix this i will have to switch to nfs :(
20:31 abyss^ JoeJulian: I've upgraded to glusterfs 3.6 (unfortunately there's no newer package for debian wheezy) and I get: failed to get the 'volume file' from server. Is this solution completely safe and OK to perform: glusterd --xlator-option *.upgrade=on -N ?
20:35 abyss^ nevermind the command didn't help :(
20:38 jwd joined #gluster
20:38 abyss^ oh ok, it helped but I had to turn off glusterfs :)
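What abyss^ ran into matches the usual post-upgrade volfile regeneration step: glusterd has to be stopped before the one-shot upgrade run. A sketch of the sequence on a sysvinit Debian system (the service name glusterfs-server is an assumption here):

    service glusterfs-server stop                  # glusterd must not be running
    glusterd --xlator-option *.upgrade=on -N       # regenerate volfiles, then exit
    service glusterfs-server start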
20:53 ayma1 joined #gluster
20:58 Merlin__ joined #gluster
21:06 Merlin__ joined #gluster
21:11 EinstCrazy joined #gluster
21:13 kkeithley_ you'll be happy to know that with http://review.gluster.org/12518 we'll be able to build 3.7 on older systems like wheezy, trusty, and {RHEL,CentOS}5.
21:13 glusterbot Title: Gerrit Code Review (at review.gluster.org)
21:17 abyss^ kkeithley_: yes, it would be nice :) However I need to upgrade my debians :) But it will happen in years ;). Thank you for the info
21:20 timotheus1 joined #gluster
21:22 dlambrig_ joined #gluster
21:35 Merlin__ joined #gluster
21:37 Peppard joined #gluster
21:40 tomatto joined #gluster
21:41 dlambrig_ joined #gluster
21:41 Merlin__ joined #gluster
21:45 hagarth_ joined #gluster
21:47 jwd joined #gluster
21:50 _Bryan_ joined #gluster
21:53 cholcombe joined #gluster
21:54 mlncn joined #gluster
22:00 Merlin__ joined #gluster
22:14 jwaibel joined #gluster
22:19 mlhamburg1 joined #gluster
22:19 xavih joined #gluster
22:24 gildub joined #gluster
22:52 cyberbootje joined #gluster
23:02 coredump joined #gluster
23:08 Merlin__ joined #gluster
23:11 VeggieMeat joined #gluster
23:17 mlhamburg1 joined #gluster
23:20 ghenry joined #gluster
23:38 zhangjn joined #gluster
23:39 zhangjn joined #gluster
23:40 zhangjn joined #gluster
23:43 dlambrig_ joined #gluster
23:44 primehaxor joined #gluster
