
IRC log for #gluster, 2015-02-21


All times shown according to UTC.

Time Nick Message
00:00 kminooie hold on, apparently even after service glusterfs-server stop I still have multiple glusterfs processes running ...
00:00 JoeJulian yes
00:02 vincent_vdk joined #gluster
00:09 plarsen joined #gluster
00:10 badone__ joined #gluster
00:23 vincent_vdk joined #gluster
00:27 kminooie http://ur1.ca/jrp0v  so the brick specific ones should die too, right?
00:28 JoeJulian right
00:29 JoeJulian pkill -f glusterfs
00:30 kminooie even though the last line in etc-glusterfs-glusterd.vol.log is [2015-02-21 00:18:30.535605] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (15), shutting down
00:30 glusterbot kminooie: ('s karma is now -59
00:31 kminooie that line is from after the service stop, but they are still running (I can kill them, but why don't they die?)
00:31 JoeJulian @processes
00:31 glusterbot JoeJulian: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal).
00:31 kminooie sorry glusterbot :)
00:32 JoeJulian So when you stop the management daemon you may not want to interrupt the brick processes.
00:33 JoeJulian It also means that ( is apparently an a-hole who gathers a lot of negative karma.
00:33 kminooie implying that they are still doing something? but they are all asleep
00:33 kminooie :)
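
To make the earlier point concrete, here is one way to see which of those daemons is still up after stopping the service (a sketch only; process roles are as glusterbot describes, paths and distro details vary):

    # list running gluster daemons without matching the grep itself
    ps -eo pid,comm,args | grep '[g]luster'
    # glusterd   = management daemon (the one "service glusterfs-server stop" stops)
    # glusterfsd = brick export daemons, one per brick; they stay up on purpose
    # glusterfs  = FUSE client mounts / NFS server / self-heal daemon processes
    # only if you really want the bricks down as well:
    pkill glusterfsd
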
00:35 sage joined #gluster
00:36 kminooie ok, I killed everything, removed the symlink, and bind-mounted an empty dir (in place of the symlink) onto the location in /opt. now let's see what happens
00:39 T3 joined #gluster
00:39 kminooie http://ur1.ca/jrp3m   seems it's working
00:40 JoeJulian Heh, cool. Never tried that before.
00:41 kminooie but 2 questions: 1- why am I getting all those warnings, and 2- why does it say the port on brick-1 is N/A (because brick-1 is another peer?)
00:41 JoeJulian Don't forget to get that in fstab.
00:41 kminooie yup thanks man, I wouldn't have thought of that (bind vs. symlink)
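
The bind mount kminooie describes can be made persistent with an fstab entry along these lines (the paths here are invented, since the real ones weren't pasted):

    # /etc/fstab -- bind an empty directory over the spot the symlink used to occupy
    /srv/gluster-empty  /opt/app/data  none  bind  0  0
    # same thing done by hand for a quick test:
    mount --bind /srv/gluster-empty /opt/app/data
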
00:42 JoeJulian It's possible that the service it was trying to connect to hadn't started yet at that point.
00:42 JoeJulian And I don't have an answer on the port question.
00:43 kminooie :) nevertheless you've been a lifesaver
00:43 JoeJulian Glad I could help.
01:12 anrao joined #gluster
01:18 bala joined #gluster
01:25 elitecoder joined #gluster
01:26 elitecoder I have two bricks, simple replicas. It appears they've lost files. I've run volume heal XYZname on both
01:26 elitecoder The files haven't shown back up, and I don't know what to do about it
01:27 JoeJulian Are the files actually on the bricks?
01:27 elitecoder Two days ago I rebooted the servers: one, then waited a few minutes, and then the other.
01:28 elitecoder JoeJulian: I don't know what you mean by that.
01:28 elitecoder A web application wrote files, saved the names to the database. Now they're gone.
01:28 elitecoder The names don't get saved to the database unless the write was successful
01:31 elitecoder A co-worker confirmed a logo that was once there, is now gone.
01:31 elitecoder The only way to remove the files is to replace them via the web interface.
01:32 JoeJulian files = files that are known to be missing. bricks = the backend storage for glusterfs.
01:32 JoeJulian I'm not sure how else to ask it.
01:32 elitecoder Well, I go to the mount point, look for them by ls'ing for the filename and it's not there.
01:32 elitecoder The webserver returns a 404
01:33 elitecoder So they're not showing up through the mount point
01:33 elitecoder whether they're in some magical glusterfs area, I don't know
01:33 JoeJulian When you say, "I go to the mount point" I assume you're referring to the glusterfs client mount point.
01:33 elitecoder yup
01:33 JoeJulian There's nothing magical about it.
01:34 JoeJulian @glossary
01:34 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume" which is accessed from a "client". The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
01:34 JoeJulian ~pasteinfo | elitecoder
01:34 glusterbot elitecoder: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
01:34 elitecoder That all makes sense
01:37 elitecoder http://fpaste.org/188451/24482623/
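
The contents of that paste aren't preserved in the log, but for a simple two-brick replicated volume the output of "gluster volume info" looks roughly like this (the server names and volume ID below are placeholders):

    Volume Name: files
    Type: Replicate
    Volume ID: 00000000-0000-0000-0000-000000000000
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: server1:/mnt/gluster/brick
    Brick2: server2:/mnt/gluster/brick
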
01:37 JoeJulian What's an example file that's missing?
01:38 PeterA joined #gluster
01:38 PeterA anyone experience gluster NFS drawing high CPU on the client??
01:39 JoeJulian Not unless the client is also a server.
01:40 elitecoder JoeJulian: /files/htdocs/identity/haX0wA7M8CuO176v8MtJPLc4R2369idqb6PDnASk.png
01:41 JoeJulian elitecoder: Ok, and I'm assuming the client mountpoint is /files?
01:42 elitecoder On the client, it's /mnt/gluster/
01:42 elitecoder so add /files to that
01:42 JoeJulian Ok, so check both servers to see if they have, "/mnt/gluster/brick//files/htdocs/identity/haX0wA7M8CuO176v8MtJPLc4R2369idqb6PDnASk.png"
01:43 JoeJulian meh, doubled up the /'s, but it'll work anyway.
01:43 elitecoder righto
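
That check can be scripted from one shell, assuming the two servers answer to the hypothetical names server1 and server2:

    # stat the missing file directly on each brick, bypassing the client mount
    f='/mnt/gluster/brick/files/htdocs/identity/haX0wA7M8CuO176v8MtJPLc4R2369idqb6PDnASk.png'
    for h in server1 server2; do
        echo "== $h =="
        ssh "$h" stat "$f"
    done
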
01:43 kminooie elitecoder: also you might wanna run 'gluster volume status files detail' and paste the output
01:44 kminooie I meant fpaste the output. don't paste it here :D
01:45 elitecoder naturally you darling
01:46 elitecoder JoeJulian: No such file or directory on both servers
01:46 JoeJulian what version is this?
01:47 elitecoder glusterfs --version
01:47 elitecoder glusterfs 3.5.2 built on Aug  6 2014 19:33:46
01:47 JoeJulian Well that eliminates that idea...
01:49 JoeJulian Ok, the software doesn't delete a file unless it's processing a posix command to do so. There's no other state that deletes a file.
01:49 JoeJulian Even overwriting it with your software wouldn't delete it. Worst case it would be truncated.
01:50 elitecoder I rebooted both on the 18th, on the 19th someone started noticing missing files
01:50 JoeJulian What filesystem is on /mnt/gluster/brick?
01:50 elitecoder Could that have anything to do with it? I think this has happened before
01:51 elitecoder uh
01:51 elitecoder trying to find out
01:52 elitecoder Are you asking /dev/xvdb1 /mnt/gluster xfs defaults 0 0
01:52 elitecoder Oops
01:52 JoeJulian yep
01:52 elitecoder fstab says xfs
01:53 JoeJulian There are states where xfs can drop a file that wasn't written to the journal, but the odds of that happening on both servers, rebooted separately, are statistically negligible.
01:54 JoeJulian Plus, it would require the power plug to be pulled, not a normal shutdown process.
01:54 elitecoder After rebooting, the cpu usage goes pretty high, it's gluster using it. Is that self healing or what is that?
01:55 JoeJulian Yes, self-heal is what does that, plus re-establishing any locks and open fds.
01:55 elitecoder Could I have rebooted the second one too soon maybe?
01:56 JoeJulian Maybe, but then your worst case should be split-brain. The only way to cure that is to delete the bad copy from the brick under /mnt/gluster by hand.
01:56 elitecoder Heh
01:57 elitecoder Next time I reboot, I'll see if I can take an image lol
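
For reference, the manual split-brain cleanup JoeJulian alludes to goes roughly like this on a 3.5 replica (a sketch, not a recipe: the file name is a placeholder and the .glusterfs path has to be filled in from the gfid that getfattr prints):

    # on the server holding the bad copy, work on the brick, never the client mount
    bad='/mnt/gluster/brick/files/htdocs/identity/AFFECTED-FILE.png'
    getfattr -n trusted.gfid -e hex "$bad"   # note the gfid
    rm "$bad"
    # also remove the gfid hardlink: /mnt/gluster/brick/.glusterfs/<aa>/<bb>/<full-gfid>
    gluster volume heal files full           # let the good copy repopulate
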
01:57 JoeJulian "On the client, it's /mnt/gluster/"... you didn't mount the client to that directory on either server, did you?
01:57 elitecoder /mnt/gluster/brick/ on the servers, and on the clients ... checking
01:58 JoeJulian Yeah, not really concerned with the clients.
01:58 elitecoder oh ok
01:58 JoeJulian Just wanted to make sure there was no "mount -t glusterfs /mnt/gluster" when your bricks are also under /mnt/gluster. People do that and wonder why things lock up.
02:00 elitecoder Haha
02:00 elitecoder This has been operational for about 5 months
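
The lock-up scenario JoeJulian is checking for, versus the layout actually in use here, looks roughly like this (volume and host names are placeholders):

    # risky: FUSE-mounting the volume on a server over the directory that holds its brick
    mount -t glusterfs localhost:/files /mnt/gluster        # brick sits at /mnt/gluster/brick
    # what this setup does instead: brick filesystem on the servers, FUSE mount on the clients
    mount /dev/xvdb1 /mnt/gluster                           # xfs brick filesystem (servers)
    mount -t glusterfs server1:/files /mnt/gluster          # FUSE mount (clients only)
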
02:00 JoeJulian The only other possibility I can think of is that the missing files are on the root filesystem, covered up by /mnt/gluster. That would require the root filesystem's /mnt/gluster/brick directory to exist and for it to have the volume-id extended attributes, which seems unlikely.
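
That last theory can be tested without unmounting anything; a sketch, using the paths from this conversation (the bind-mount target directory is made up):

    # look at what the root filesystem has underneath the mounted /mnt/gluster
    mkdir -p /tmp/rootpeek
    mount --bind / /tmp/rootpeek
    ls -la /tmp/rootpeek/mnt/gluster/brick 2>/dev/null || echo "nothing hidden there"
    umount /tmp/rootpeek
    rmdir /tmp/rootpeek
    # a real brick root carries the volume-id extended attribute JoeJulian mentions
    getfattr -n trusted.glusterfs.volume-id -e hex /mnt/gluster/brick
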
02:01 kminooie thanks again JoeJulian I am outta here :D have a good weekend
02:01 JoeJulian You too
02:03 elitecoder ok thanks JoeJulian
02:03 elitecoder I'm going to do something fun now, and pick this back up on monday
02:03 JoeJulian Me too. later
02:04 elitecoder Enjoy your weekend :]
02:04 elitecoder I'll probably be seeing you all on monday lol
02:04 elitecoder bai
02:09 badone__ joined #gluster
02:28 ildefonso joined #gluster
02:40 T3 joined #gluster
02:56 hagarth joined #gluster
03:21 ildefonso joined #gluster
03:26 rastar_afk joined #gluster
03:26 sac`away joined #gluster
03:28 hchiramm joined #gluster
03:48 Fetch left #gluster
03:55 T3 joined #gluster
04:03 hagarth joined #gluster
04:10 anrao joined #gluster
04:25 hagarth joined #gluster
04:29 shubhendu joined #gluster
04:43 hagarth joined #gluster
04:58 T3 joined #gluster
05:05 hagarth joined #gluster
05:25 maveric_amitc_ joined #gluster
05:36 hagarth joined #gluster
05:56 shubhendu joined #gluster
06:47 T3 joined #gluster
07:01 rjoseph joined #gluster
07:03 kovshenin joined #gluster
07:11 hchiramm joined #gluster
07:20 hchiramm joined #gluster
07:21 sac`away joined #gluster
07:21 hagarth joined #gluster
07:27 TvL2386 joined #gluster
07:38 lalatenduM joined #gluster
07:41 bala joined #gluster
07:44 social joined #gluster
08:22 rotbeard joined #gluster
08:34 LebedevRI joined #gluster
08:35 T3 joined #gluster
08:42 jvandewege_ joined #gluster
08:46 maveric_amitc_ joined #gluster
08:48 inodb joined #gluster
08:51 shubhendu joined #gluster
08:51 shubhendu_ joined #gluster
08:52 jvandewege_ joined #gluster
09:03 bala joined #gluster
09:33 deniszh joined #gluster
09:45 kbyrne joined #gluster
09:51 T3 joined #gluster
10:29 ekuric joined #gluster
11:22 T3 joined #gluster
11:23 bala joined #gluster
11:42 inodb joined #gluster
11:49 awerner joined #gluster
12:01 tuxcrafter joined #gluster
12:01 tuxcrafter hi all, I'm trying to build a glusterfs cluster for testing
12:02 tuxcrafter i want to use debian testing with glusterfs 3.6 to run kvm on top of it
12:02 tuxcrafter thinking about a three server setup
12:02 tuxcrafter testing in kvm vms
12:03 tuxcrafter but should i use ext4 or xfs
12:04 tuxcrafter and the documentation states that Gluster does not support so-called “structured data”, meaning live SQL databases
12:04 tuxcrafter what does that mean for me if i want to run kvm guests on top of it
12:04 tuxcrafter does that mean its not smart to use glusterfs if i want to run guests with lots of database access?
12:04 tuxcrafter should i use drbd instead
12:07 uebera|| joined #gluster
12:30 tuxcrafter please highlight my nick
12:38 maveric_amitc_ joined #gluster
13:02 papamoose joined #gluster
13:10 harish joined #gluster
13:10 diegows joined #gluster
13:10 T3 joined #gluster
13:11 inodb joined #gluster
13:23 harish joined #gluster
13:27 bennyturns joined #gluster
13:32 bennyturns joined #gluster
13:37 shubhendu joined #gluster
13:49 T3 joined #gluster
14:14 chirino joined #gluster
14:27 squizzi joined #gluster
14:37 plarsen joined #gluster
14:50 inodb joined #gluster
14:55 kaushal_ joined #gluster
15:12 lalatenduM joined #gluster
16:01 bit4man joined #gluster
16:08 anoopcs joined #gluster
16:11 jiffin joined #gluster
16:17 haomaiwang joined #gluster
16:23 elico joined #gluster
16:28 bala joined #gluster
16:40 gem joined #gluster
16:46 soumya joined #gluster
17:14 MacWinner joined #gluster
17:28 T3 sort-of-off-topic: what tools are you guys using to monitor system resource (cpu, memory, network) trends on your systems?
17:37 wkf joined #gluster
17:40 ndevos T3: I use Zabbix, and still want to check if https://github.com/htaira/glubix makes monitoring Gluster easy
17:41 kbyrne joined #gluster
17:52 ricky-ticky joined #gluster
17:58 shubhendu joined #gluster
18:42 maveric_amitc_ joined #gluster
18:43 bennyturns joined #gluster
18:43 Sal joined #gluster
18:45 Guest53088 hi guys
19:08 maveric_amitc_ joined #gluster
19:09 PorcoAranha joined #gluster
19:34 Intensity joined #gluster
19:41 T3 cool, ndevos. I'll check it out ;)
19:49 tuxcrafter i was at FOSDEM 2015 and I thought I saw a presentation with a three-node glusterfs setup for kvm
19:49 tuxcrafter so I'm trying to make one right now
19:50 tuxcrafter but I can't find how to create a gluster volume for three nodes
19:50 tuxcrafter I'm running glusterfs 3.6.2 built on Jan 21 2015 14:23:41
20:04 tuxcrafter http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt < I'm reading that
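
For reference, a three-node replicated volume of the kind tuxcrafter is describing is created roughly like this on 3.6 (host names and brick paths below are invented for the example):

    # run on one node, once all three peers can reach each other
    gluster peer probe node2
    gluster peer probe node3
    gluster volume create vmstore replica 3 \
        node1:/data/brick/vmstore \
        node2:/data/brick/vmstore \
        node3:/data/brick/vmstore
    gluster volume start vmstore
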
21:01 toxic_apple_pie joined #gluster
21:04 PorcoAranha joined #gluster
21:23 bennyturns joined #gluster
21:38 elico joined #gluster
21:52 R0ok_ joined #gluster
22:13 basso left #gluster
22:15 ipengineer joined #gluster
22:18 ipengineer I am trying to set up a three-node environment where all three nodes are located at different physical locations, connected via a VPN. Each of these nodes will have local clients that will need read/write access. Can gluster do this and keep everything in sync? So if a user writes a new file on node1, will it be visible to users on node2?
22:29 badone__ joined #gluster
22:30 plarsen joined #gluster
22:36 asku joined #gluster
22:37 R0ok_ joined #gluster
22:37 jbrooks joined #gluster
22:52 PaulCuzner joined #gluster
22:56 ipengineer joined #gluster
22:58 T3 joined #gluster
23:19 stickyboy joined #gluster
23:21 Andreas-IPO joined #gluster
23:21 _NiC joined #gluster
23:21 masterzen joined #gluster
23:27 R0ok_ joined #gluster
