
IRC log for #gluster, 2013-12-30


All times shown according to UTC.

Time Nick Message
00:21 jporterfield joined #gluster
00:26 jporterfield joined #gluster
00:39 mkzero joined #gluster
00:46 jporterfield joined #gluster
00:51 daMaestro joined #gluster
01:10 psyl0n joined #gluster
01:42 jporterfield joined #gluster
01:46 harish joined #gluster
01:48 jporterfield joined #gluster
02:07 andreask joined #gluster
02:23 harish joined #gluster
02:51 jporterfield joined #gluster
03:10 kshlm joined #gluster
03:38 RameshN joined #gluster
03:45 itisravi joined #gluster
03:48 kanagaraj joined #gluster
03:48 shyam joined #gluster
04:08 ndarshan joined #gluster
04:18 jporterfield joined #gluster
04:25 bala joined #gluster
04:30 jporterfield joined #gluster
04:35 rjoseph joined #gluster
04:40 ppai joined #gluster
04:44 vpshastry1 joined #gluster
04:48 ajha joined #gluster
04:49 dusmant joined #gluster
04:50 dusmant joined #gluster
04:50 MiteshShah joined #gluster
04:52 dusmant joined #gluster
04:52 ababu joined #gluster
05:14 davinder joined #gluster
05:24 MiteshShah joined #gluster
05:32 hagarth joined #gluster
05:36 dhyan joined #gluster
05:49 MiteshShah joined #gluster
06:02 stickyboy joined #gluster
06:05 satheesh joined #gluster
06:13 jporterfield joined #gluster
06:17 zeittunnel joined #gluster
06:22 CheRi joined #gluster
06:22 lalatenduM joined #gluster
06:28 overclk joined #gluster
06:31 gr33dy joined #gluster
06:38 jporterfield joined #gluster
06:48 jporterfield joined #gluster
07:02 mohankumar__ joined #gluster
07:03 92AAAMCJH joined #gluster
07:04 dusmant joined #gluster
07:09 meghanam joined #gluster
07:09 meghanam_ joined #gluster
07:16 XATRIX joined #gluster
07:16 XATRIX Hi guys, how can i start glusterfsd (Centos 6.5)
07:16 XATRIX I don't want to edit config file
07:16 XATRIX Simply to start daemon and configure it via CLI
07:16 XATRIX [root@vox1-ua ~]# glusterfsd --debug -N
07:16 XATRIX [2013-12-30 07:15:49.068555] C [glusterfsd.c:1406:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed (No such file or directory)
07:17 samppah XATRIX: glusterd is the name of the daemon you want to start
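
(For reference: on CentOS 6.x the management daemon is started with the stock init script; the per-brick glusterfsd processes are spawned by glusterd itself once volumes are started. A minimal sketch, assuming the glusterfs-server package is installed:)

    # start the management daemon (glusterd), not the per-brick glusterfsd
    service glusterd start
    # start it automatically at boot
    chkconfig glusterd on
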
07:18 XATRIX O_o
07:19 XATRIX samppah: http://ur1.ca/ga7z4
07:19 glusterbot Title: #64923 Fedora Project Pastebin (at ur1.ca)
07:19 jtux joined #gluster
07:19 XATRIX Is it ok
07:19 XATRIX ?
07:20 samppah XATRIX: sounds like it's shutting down.. but i'm not familiar with that error message
07:21 samppah does it shutdown with service glusterd start ?
07:21 XATRIX No no, i did CTRL+C
07:22 samppah aaha
07:22 XATRIX You should see symbol ^C
07:22 XATRIX At the end
07:22 samppah true
07:22 XATRIX But , was the start ok ?
07:23 samppah quite... [2013-12-30 07:18:45.208545] E [store.c:394:gf_store_handle_retrieve] 0-: Unable to retrieve store handle /var/lib/glusterd/glusterd.info, error: No such file or directory
07:23 samppah not sure about those..
07:27 XATRIX Possibly the first start, it was creating appropriate directories and files
07:27 XATRIX One more question
07:28 XATRIX I have a storage partition with ext4
07:29 XATRIX /dev/md0 , and it already contains files and folders
07:29 XATRIX If i firstly mount it to /storage
07:29 XATRIX And later on, i'll do bricks setup, and mount the gluster-share to /mnt
07:29 XATRIX /mnt - will be empty
07:30 XATRIX How can i create/mount the gluster share which will contain all the data i have already on my disk ?
07:32 samppah i.e. if you have a file named /storage/file.txt and you do stat /mnt/file.txt.. it should become visible in the gluster mountpoint
07:32 samppah but.. there seems to be some gotchas with folders
07:32 samppah i'm not sure how to deal with them
07:34 XATRIX ok, maybe there's a way to make all the files i already have on disk visible ?
07:34 XATRIX disk = /storage
07:35 XATRIX /dev/mapper/mdraid1--disk-mdraid1--disk--storage on /storage type ext4 (rw,relatime,errors=remount-ro)
07:35 samppah find /mnt might do it
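
(The trick samppah is describing: a lookup through the glusterfs mount makes gluster create the missing metadata for files that already sit on the brick. A hedged sketch using names from this conversation; note that starting from a non-empty brick is not an officially supported path, so back the data up first:)

    # mount the volume (volume name from later in this log)
    mount -t glusterfs vox1-ua:/datastorage /mnt
    # stat one pre-existing file so it becomes visible through the mount
    stat /mnt/file.txt
    # walk the whole tree to trigger lookups (and metadata self-heal) for everything
    find /mnt >/dev/null
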
07:35 XATRIX Also do i have to configure any extra mount options for mount.glusterfs ?
07:35 samppah not necessary
07:35 XATRIX Maybe noatime , relatime or something else ?
07:35 XATRIX Ok
07:41 XATRIX [root@vox1-ua storage]# gluster volume create datastorage replica 2 transport tcp vox1-ua:/storage vox2-ua:/storage
07:41 XATRIX volume create: datastorage: failed: /storage or a prefix of it is already part of a volume
07:41 glusterbot XATRIX: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
07:41 XATRIX ok
07:50 XATRIX samppah: any ideas what can i do to fix ?
07:51 XATRIX http://ur1.ca/ga852
07:51 glusterbot Title: #64926 Fedora Project Pastebin (at ur1.ca)
07:51 ctria joined #gluster
07:51 hagarth XATRIX: clear the extended attributes on both nodes
07:51 hagarth and then attempt re-creating the volume
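
(The recipe behind hagarth's advice, and the glusterbot link above, is roughly the following, run on both nodes, with /storage being the brick path from XATRIX's create command — a sketch, not verified here:)

    # remove the gluster markers left over from the earlier attempt
    setfattr -x trusted.glusterfs.volume-id /storage
    setfattr -x trusted.gfid /storage
    rm -rf /storage/.glusterfs
    # restart glusterd, then retry the volume create
    service glusterd restart
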
07:54 samppah anything in the log files for first error message?
07:54 samppah "volume create: datastorage: failed"
07:56 XATRIX both nodes! you're right!
07:56 ekuric joined #gluster
07:56 XATRIX samppah: nothing in logs
07:57 XATRIX Yes, you're right... That fixed the error
07:59 shyam joined #gluster
08:21 vimal joined #gluster
08:30 benjamin_ joined #gluster
08:32 pk joined #gluster
08:32 ndarshan joined #gluster
08:37 andreask joined #gluster
08:39 getup- joined #gluster
08:58 harish joined #gluster
09:05 mgebbe_ joined #gluster
09:05 mgebbe_ joined #gluster
09:05 glusterbot New news from newglusterbugs: [Bug 1040844] glusterd process crashed <https://bugzilla.redhat.com/show_bug.cgi?id=1040844>
09:22 vpshastry joined #gluster
09:37 dusmant joined #gluster
09:45 satheesh joined #gluster
09:46 simon_ joined #gluster
09:51 benjamin_ joined #gluster
09:58 ngoswami joined #gluster
10:05 mohankumar__ joined #gluster
10:08 RedShift joined #gluster
10:14 dusmant joined #gluster
10:34 benjamin_ joined #gluster
10:47 verdurin joined #gluster
11:10 bala joined #gluster
11:16 barnes left #gluster
11:16 RameshN joined #gluster
11:20 psyl0n joined #gluster
11:32 calum_ joined #gluster
11:40 tru_tru joined #gluster
11:47 jporterfield joined #gluster
11:48 shyam joined #gluster
11:53 tru_tru joined #gluster
11:56 pk left #gluster
12:00 rotbeard joined #gluster
12:01 ofu___ joined #gluster
12:02 psyl0n_ joined #gluster
12:03 fyxim_ joined #gluster
12:03 k4nar_ joined #gluster
12:03 natgeorg joined #gluster
12:05 pafkor joined #gluster
12:05 saltsa_ joined #gluster
12:05 twx_ joined #gluster
12:05 samppah_ joined #gluster
12:05 Alex___ joined #gluster
12:05 l0uis_ joined #gluster
12:05 l0uis_ joined #gluster
12:05 bp___ joined #gluster
12:05 jporterfield_ joined #gluster
12:05 tg2_ joined #gluster
12:05 haakon___ joined #gluster
12:06 wgao_ joined #gluster
12:07 NuxRo joined #gluster
12:07 mkzero joined #gluster
12:07 AndreyGrebenniko joined #gluster
12:07 kanagaraj_ joined #gluster
12:08 qdk joined #gluster
12:22 samppah_ @puppet
12:22 glusterbot samppah_: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
12:40 davidbierce joined #gluster
12:42 bala joined #gluster
13:03 bala joined #gluster
13:22 dhyan joined #gluster
13:49 davidbierce joined #gluster
14:00 jporterfield joined #gluster
14:00 harish_ joined #gluster
14:08 bala joined #gluster
14:12 dneary joined #gluster
14:22 theron joined #gluster
14:24 harish__ joined #gluster
14:40 jag3773 joined #gluster
14:44 dhyan joined #gluster
14:45 chirino joined #gluster
14:50 davidbierce joined #gluster
14:53 nocturn joined #gluster
14:55 dbruhn joined #gluster
15:14 diegows joined #gluster
15:15 andreask joined #gluster
15:15 zerick joined #gluster
15:40 asku joined #gluster
15:43 psyl0n joined #gluster
15:47 psyl0n joined #gluster
15:48 psyl0n joined #gluster
15:52 _br_ joined #gluster
15:54 tryggvil joined #gluster
15:55 satheesh joined #gluster
16:00 vpshastry joined #gluster
16:02 daMaestro joined #gluster
16:06 vpshastry left #gluster
16:10 psyl0n joined #gluster
16:19 xavih joined #gluster
16:20 jbrooks joined #gluster
16:36 ctr7 joined #gluster
16:54 uebera|| joined #gluster
16:54 uebera|| joined #gluster
16:58 aixsyd joined #gluster
16:58 aixsyd Men!
16:59 dhyan joined #gluster
17:01 Onoz joined #gluster
17:01 hagarth joined #gluster
16:58 aixsyd Thoughts for testing purposes - what sort of failures are common with GlusterFS that I should know about, test for, and write up how-to-fix docs for?
17:09 pravka joined #gluster
17:22 dhyan joined #gluster
17:26 purpleidea samppah_: did i see the bat signal?
17:31 samppah_ purpleidea: everything is fine in gotham city :)
17:33 samppah_ aixsyd: split brain is probably the most common failure
17:33 samppah_ err.. issue
17:34 samppah_ @splitbrain
17:34 glusterbot samppah_: To heal split-brain in 3.3+, see http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ .
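
(A rough outline of what the linked post describes, for 3.3+ — VOLNAME and the brick paths below are placeholders:)

    # list the files gluster currently flags as split-brain
    gluster volume heal VOLNAME info split-brain
    # the fix from the blog post: on the brick holding the *bad* copy, delete the file
    # and its gfid hard link under .glusterfs, then let self-heal copy the good one back
    rm /path/to/brick/some/file
    rm /path/to/brick/.glusterfs/aa/bb/aabbcccc-....   # hard link named after the file's gfid
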
17:38 mohankumar__ joined #gluster
18:03 andreask joined #gluster
18:05 jobewan joined #gluster
18:17 jostin joined #gluster
18:18 jostin hi all, i have a very crazy situation with a 4-node gluster cluster, maybe anyone can help with that
18:20 jostin basically there is a glusterfsd process that takes all the cpus for hours, also on its replica pair, and it's not serving files; the logs show nothing special and the rest of the glusterfsd daemons take around 40% cpu time (all of them)
18:20 jostin very crazy
18:20 jostin gluster is going mad
18:23 jostin on one server just 1 glusterfsd is running high cpu and on the other node all glusterfsds are at 40% avg when they are not busy at all
18:23 jostin for hours without change
18:30 Knyaz joined #gluster
18:31 Knyaz hi guys!
18:32 Knyaz I have a bit of problem with Gluster installed on RedHat 6.3
18:32 Knyaz root drive is filling up and I don't see any big log files
18:33 Knyaz only a few GBs left, any suggestion why this is happening ?
18:33 Knyaz OS is installed on 30 GB partition and 2 x 500 GB disks with it, using them as brick 1 and brick 2
18:34 Knyaz this setup is on two server with same configurations
18:34 Knyaz 30 GB OS drive and 2 x 500 GB bricks on two servers
18:35 Knyaz "/var/log/glusterfs is only 1.1 M"
18:46 dhyan joined #gluster
18:47 Knyaz Hello .. newbie here :)
18:48 Alex You should probably check disk usage across the rest of your root filesystem - the rest of /var, for instance. You can be lazy and do something like 'du -sh /var/*' and drill down from there, too
18:49 polfilm joined #gluster
18:49 saltsa joined #gluster
18:49 Alex something like gdmap - http://gdmap.sourceforge.net/ - might give you a bit more info
18:49 glusterbot Title: GD Map - A tool to visualize disk space (at gdmap.sourceforge.net)
18:50 Alex That way you can work out whether Gluster is at all responsible
18:54 saltsa joined #gluster
18:56 thogue joined #gluster
18:56 Knyaz @Alex the total size of /var folder is around 273 MB only
18:56 Knyaz total size of OS disk usage is not more than 8 GB if I check directory by directory
18:56 Alex Yep, I'm not going to fix this issue for you, just trying to explain the kind of tools you can use to investigate disk usage. It may be something as simple as you've forgotten to actually mount the 500GB disks before using the paths for bricks :)
18:57 Alex It could also be processes which had large logfiles that have now been deleted, but the files haven't been closed - so the OS thinks it's still used.
18:57 Knyaz yes I think this is the problem
18:57 Knyaz I can see very large files are opened by glusterfsd which are no longer present on disk
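
(A quick way to confirm deleted-but-still-open files and see which process is holding the space — a hedged sketch:)

    # open files whose link count is zero, i.e. deleted but still held open
    lsof +L1
    # or narrow it down to the gluster brick daemons
    lsof -p "$(pidof glusterfsd | tr ' ' ',')" | grep -i deleted
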
18:58 Knyaz just a quick question, if I restart glusterd, what will happen ???
18:58 Knyaz will applications crash or lose any data connectivity ?
18:58 Knyaz 2 x servers with same configuraitons
18:59 Knyaz first server is giving this problem .. second is fine .. just 6 GB usage on OS partition
19:23 polfilm joined #gluster
19:31 zmotok joined #gluster
19:32 jablonski joined #gluster
19:34 jablonski dear all, I would like to know if it's possible to go from a replicated-distributed volume to a pure replicated one (when adding some new bricks to the volume, it was automatically converted to a replicated-distributed one)? Gluster 3.2.4 on an ScientificLinux 6.1 X86_64, thank you for your time!
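
(The question goes unanswered in this log. For what it's worth, on later releases (3.3+) a 2x2 distributed-replicated volume is shrunk back to a single replica pair by removing one replica pair with remove-brick; a hedged sketch with placeholder names — note that 3.2.x remove-brick does not migrate data off the removed bricks:)

    # remove one replica pair; 'start' migrates its data to the remaining pair (3.3+)
    gluster volume remove-brick VOLNAME server3:/brick server4:/brick start
    gluster volume remove-brick VOLNAME server3:/brick server4:/brick status
    # once the status shows completed, make the removal permanent
    gluster volume remove-brick VOLNAME server3:/brick server4:/brick commit
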
19:38 glusterbot New news from newglusterbugs: [Bug 1047378] Request for XML output ignored when stdin is not a tty <https://bugzilla.redhat.com/show_bug.cgi?id=1047378>
19:46 jostin anyone knows why all the cpus are on 70% for 2 days without much traffic(almost none)
19:47 psyl0n joined #gluster
19:48 rotbeard jostin, do you use an older gluster version? it's the autoheal, maybe ;)
19:51 jostin its not autoheal or im not sure how to test it or how to stop it. there was no reboot or server rebooting etc..
19:52 semiosis jostin: strace
19:52 semiosis or logs
19:52 semiosis should tell you whats going on
19:55 plarsen joined #gluster
20:02 jostin the /data03 glusterfsd process that shows 400% for 2 days shows no log issue (just some info with nothing special), strace looks stuck
20:05 jostin i check the brick log etc..
20:05 jostin top, strace, fds..
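
(A few generic ways to see what a spinning glusterfsd is doing — the pid and VOLNAME below are placeholders, not commands from the channel:)

    # per-syscall time summary for the busy brick daemon; Ctrl+C after ~10s to print totals
    strace -c -f -p <pid-of-busy-glusterfsd>
    # which threads inside the process are burning CPU
    top -H -p <pid-of-busy-glusterfsd>
    # gluster's own per-fop profiling (3.3+)
    gluster volume profile VOLNAME start
    gluster volume profile VOLNAME info
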
20:15 zmotok joined #gluster
20:16 sankey joined #gluster
20:18 jostin version 3.3.1
20:22 jablonski joined #gluster
20:22 jablonski :( got disconnected a bit earlier and didn't notice.. perhaps someone answered on the question if it's possible to go back to a purely replicated volume from a distributed-replicated one?
20:23 jostin output of the problematic glusterfsd
20:23 jostin readv(41, [{"\2e2\363\0\0\0\0", 8}], 1) = 8
20:23 jostin readv(41, [{"\0\0\0\2\0\23\320\5\0\0\1J\0\0\0\33\0\5\363\227\0\0\0$", 24}], 1) = 24
20:23 jostin readv(41, [{"\377\377\377\377\0\0\0\0\0\0\0\0\0\0\0\2\0\0\0!\0\0\7\320\0\0\0\10\344A\30\33"..., 236}], 1) = 236
20:23 jostin futex(0x6599120, FUTEX_WAKE_PRIVATE, 1) = 1
20:23 jostin futex(0x656527c, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x6565278, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
20:23 jostin futex(0x6565250, FUTEX_WAKE_PRIVATE, 1) = 1
20:23 jostin joined #gluster
20:28 jostin vmstat shows many threads are in ready state (r) but the storage isn't supposed to be doing much work (not many clients connected) so i really don't know why it's been running like that for 2 days
20:29 jostin definitely a bug
20:31 jag3773 joined #gluster
20:36 psyl0n joined #gluster
20:45 sankey i'm trying to distribute a maildir across two servers using glusterfs
20:45 sankey the exim user on each server should be able to write into the maildir,
20:45 sankey and the dovecot user should be able to read from the maildir
20:46 sankey the problem is that I installed debian differently on each server which caused the uid/gid mappings for system users to be completely different
20:46 sankey is there a solution to this problem, or should i just reinstall debian?
20:46 sankey eh, technically i'm *replicating* the maildir across two servers, but that detail doesn't really change anything
20:46 jporterfield joined #gluster
21:00 semiosis sankey: i use puppet to manage users & groups on my systems
21:00 semiosis well, i use puppet to manage everything on my systems
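
(Whatever tool manages them, the underlying fix for sankey's problem is to give the exim and dovecot users identical uid/gid pairs on both servers. A hedged sketch — the IDs and the maildir path are arbitrary placeholders, the IDs must be free on both machines, and mail services should be stopped first:)

    # pick one set of IDs and apply it on both servers
    groupmod -g 5001 exim    && usermod -u 5001 -g 5001 exim
    groupmod -g 5002 dovecot && usermod -u 5002 -g 5002 dovecot
    # re-own anything created under the old IDs, e.g. the maildir on each brick
    chown -R exim:exim /path/to/maildir
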
21:01 CROS_ joined #gluster
21:26 psyl0n joined #gluster
21:51 askb joined #gluster
21:54 JoeJulian +1 for puppet
21:55 jobewan joined #gluster
22:40 F^nor joined #gluster
22:56 dhyan joined #gluster
22:58 tor joined #gluster
22:58 daMaestro joined #gluster
23:01 CROS___ joined #gluster
23:03 CROS___ Hey everyone. My servers are currently in a bad state and I'm trying to figure out how to fix things up. I have a replicated volume with just 2 servers, I had one of the server bricks offline for about a week and didn't realize. Any time I try adding it back in, though, the CPU usage spikes (I think it may be trying to self-heal). When the CPU usage spikes, it causes the mounts to be unresponsive for clients. Any ideas on what I can possibly do?
23:05 JoeJulian CROS_: What version?
23:05 theron joined #gluster
23:05 CROS___ 3.4
23:06 CROS___ I can find the minor version. one second
23:06 CROS___ 3.4.1
23:06 JoeJulian Ok, so it's probably not the old blocking self-heal problem that used to exist.
23:07 JoeJulian It may be re-establishing FDs and locks. That can be a pretty expensive operation if there is any significant number of open files.
23:07 CROS___ Yeah, I actually had tried doing a replace brick operation (because I got bigger drives in place), but that basically killed the servers (on 3.3). So I attempted to upgrade and am now on 3.4 and I can't seem to get this second brick back in.
23:08 CROS___ Hmm, when would it establish the lock on a file?
23:09 CROS___ For every file on the brick?
23:09 JoeJulian every open file and open files with locks.
23:09 JoeJulian A lock is established by the software that opens the file.
23:09 CROS___ Would it open/lock the file to check the self heal?
23:11 JoeJulian There is a process for that, but it's not exclusive.
23:11 JoeJulian self-heal runs at a lower priority level than regular usage.
23:12 CROS___ got'cha
23:12 JoeJulian Should still max out your CPU and bandwidth, but not at the expense of your usage.
23:12 CROS___ Is there anything I can check in the logs maybe?
23:12 JoeJulian Always good to check the client logs. Never can tell what you're going to find there.
23:13 JoeJulian Best bet, imho, is to wait until your least use time of day and bite the bullet.
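
(Two things that can help with that on 3.4 — a sketch, not advice given in the channel; VOLNAME is a placeholder:)

    # watch how much the volume still has left to heal
    gluster volume heal VOLNAME info
    # keep the self-heal daemon quiet until the low-traffic window
    gluster volume set VOLNAME cluster.self-heal-daemon off
    # ...then turn it back on and kick off a full crawl
    gluster volume set VOLNAME cluster.self-heal-daemon on
    gluster volume heal VOLNAME full
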
23:13 CROS___ Yeah, it's strange that the mounts basically become unresponsive.
23:13 CROS___ heh
23:13 JoeJulian "basically" or "actually"?
23:13 CROS___ actually
23:14 JoeJulian basically implies that they're not unresponsive, just less responsive than you'd like. :D
23:14 JoeJulian What's the use case?
23:14 CROS___ data servers for serving a site. images and download files
23:15 JoeJulian Hmm, that doesn't sound like it should be a major troublemaker.
23:15 JoeJulian 50 open VM images, I could see...
23:16 CROS___ I run nginx on top to serve the files
23:16 CROS___ With about 200mbps throughput constant through the network for that
23:16 CROS___ But, it's really not stressing the drives at all. Mostly cached data.
23:16 JoeJulian Yeah, def would help to see what the client logs think the problem is.
23:16 CROS___ k
23:17 CROS___ will check
23:18 CROS___ The other thing I was thinking of doing is creating new volumes and just copying the data over
23:18 CROS___ Try to start fresh
23:18 daMaestro joined #gluster
23:19 CROS___ The main issue is that I would like to use the same two servers and I can't start the second server without spiking the CPU and taking the site offline.
23:21 qdk joined #gluster
23:21 CROS___ Weird. In the client every second or so I'm seeing this output in the logs: http://pastebin.com/X1xrAgw2
23:21 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
23:22 CROS___ oops: http://fpaste.org/65039/44574813/
23:22 glusterbot Title: #65039 Fedora Project Pastebin (at fpaste.org)
23:23 CROS___ oh, wait. that's not weird. client-0 is the brick that's down
23:26 tryggvil joined #gluster
23:46 jporterfield joined #gluster
23:50 davidbierce I've had to do 90-ish open VMs a few times with servers and bricks failing, and re-establishing FDs wasn't bad....except when a client refused to give up the lock :)
23:58 jporterfield joined #gluster
