IRC log for #gluster, 2014-05-19


All times shown according to UTC.

Time Nick Message
00:00 edward3 joined #gluster
00:16 MugginsM yay for turning the whole cluster of and on again
00:16 MugginsM off
00:17 gdubreui joined #gluster
00:24 yinyin_ joined #gluster
00:29 tomased joined #gluster
00:49 hchiramm_ joined #gluster
01:05 sjm joined #gluster
01:07 vpshastry joined #gluster
01:10 bala joined #gluster
01:28 hchiramm_ joined #gluster
01:38 DV_ joined #gluster
01:45 diegows joined #gluster
02:06 stickyboy joined #gluster
02:25 mjsmith2 joined #gluster
02:40 ceiphas_ joined #gluster
02:46 bharata-rao joined #gluster
03:14 sjm left #gluster
03:22 kshlm joined #gluster
03:41 kshlm joined #gluster
03:45 davinder joined #gluster
03:45 shubhendu joined #gluster
03:52 itisravi joined #gluster
03:55 RameshN_ joined #gluster
04:04 tomato joined #gluster
04:06 tomato if I can't use gluster for structured data (databases) but I use gluster as the backend storage for OpenStack, where can I store my databases? Swift, Cinder?
04:08 rastar joined #gluster
04:08 MugginsM what kind of databases?
04:10 haomaiwang joined #gluster
04:10 ppai joined #gluster
04:11 tomato MySql. I read that gluster does not support structured data very well. I am also trying to deploy Trove
04:12 tomato From the Basic Gluster Troubleshooting:  15. I am getting weird errors and inconsistencies from a database I am running in a Gluster volume
04:12 tomato Unless your database does almost nothing, this is expected. Gluster does not support structured data (like databases) due to issues you will likely encounter when you have a high number of transactions being persisted, lots of concurrent connections, etc. Gluster *IS*, however, a perfect place to store your database backups.
04:12 kumar joined #gluster
04:13 MugginsM yeah, if you want to cluster databases, you're best to use the databases own tools
04:13 MugginsM they like to manage their own files quite carefully
04:14 MugginsM MySql will do replication/clustering I think
04:14 tomato Just trying to understand what would be the underlying storage architecture of an openstack deployment. I thought gluster integrated with cinder and swift would be enough.
04:15 MugginsM I'm not familiar with openstack, sorry :-/
04:18 psharma joined #gluster
04:29 yinyin joined #gluster
04:30 kanagaraj joined #gluster
04:30 ndarshan joined #gluster
04:31 aviksil joined #gluster
04:32 meghanam joined #gluster
04:33 nthomas joined #gluster
04:39 aviksil joined #gluster
04:40 nishanth joined #gluster
04:43 hchiramm_ joined #gluster
04:45 kdhananjay joined #gluster
04:46 bala joined #gluster
04:48 gmcwhist_ joined #gluster
04:55 bala joined #gluster
04:58 rejy joined #gluster
05:07 stickyboy joined #gluster
05:11 aravindavk joined #gluster
05:13 hchiramm_ joined #gluster
05:16 RameshN joined #gluster
05:17 ramteid joined #gluster
05:17 hagarth joined #gluster
05:23 nshaikh joined #gluster
05:29 prasanthp joined #gluster
05:30 haomaiw__ joined #gluster
05:32 dusmant joined #gluster
05:38 kanagaraj joined #gluster
05:38 vpshastry joined #gluster
05:43 davinder joined #gluster
05:44 sahina joined #gluster
05:50 hagarth joined #gluster
06:00 vpshastry1 joined #gluster
06:00 kaushal_ joined #gluster
06:03 glusterbot New news from resolvedglusterbugs: [Bug 1086756] Add documentation for the Feature: zerofill API for GlusterFS <https://bugzilla.redhat.com/show_bug.cgi?id=1086756>
06:05 kanagaraj joined #gluster
06:07 lalatenduM joined #gluster
06:12 Philambdo joined #gluster
06:14 hagarth joined #gluster
06:14 rahulcs joined #gluster
06:18 ravindran1 joined #gluster
06:20 hchiramm_ joined #gluster
06:22 Philambdo joined #gluster
06:23 harish joined #gluster
06:26 vimal joined #gluster
06:33 glusterbot New news from resolvedglusterbugs: [Bug 1086757] Add documentation for the Feature: Disk Encryption <https://bugzilla.redhat.com/show_bug.cgi?id=1086757>
06:41 elico joined #gluster
06:48 ctria joined #gluster
06:53 rahulcs joined #gluster
06:55 _Bryan_ joined #gluster
06:55 dusmant joined #gluster
06:55 glusterbot New news from newglusterbugs: [Bug 1094815] [FEAT]: User Serviceable Snapshot <https://bugzilla.redhat.com/show_bug.cgi?id=1094815>
07:02 Philambdo joined #gluster
07:07 edward1 joined #gluster
07:12 tjikkun_work joined #gluster
07:13 Pupeno joined #gluster
07:19 keytab joined #gluster
07:24 kdhananjay joined #gluster
07:24 borreman_dk left #gluster
07:25 borreman_dk joined #gluster
07:25 rahulcs joined #gluster
07:27 Pupeno What's CTDB?
07:29 lalatenduM Pupeno, https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/ch09s05.html
07:29 glusterbot Title: 9.5. Configuring Automated IP Failover for NFS and SMB (at access.redhat.com)
07:29 hagarth Pupeno: you can find more on ctdb in samba.org too
07:33 kshlm joined #gluster
07:36 fsimonce joined #gluster
07:36 vpshastry joined #gluster
07:38 ktosiek joined #gluster
07:39 cppking left #gluster
07:48 rahulcs joined #gluster
07:55 monotek joined #gluster
07:59 ngoswami joined #gluster
07:59 Pupeno Can I add replicas to a volume after it's been created and it's running?
08:04 samppah yeah
08:06 Pupeno Mhhh... replicas require 2 to start with. I wish I could start with 1 and add the other one later.
08:06 ricky-ti1 joined #gluster
08:08 samppah Pupeno: you can start with one and expand it to 2 later :)
08:09 samppah it's not necessary to specify replicate when you create volume
08:09 samppah ie. gluster vol create volName server1:/brick1; gluster vol add-brick volName replica 2 server2:/brick1
08:11 Pupeno Mh... let me try again.
08:12 Pupeno When I do vol create volname server1:/brick1, I get a volume of type distribute, not replica.
08:13 samppah Pupeno: that's fine, it should convert it to replica once you add new brick with replica 2 option
08:13 Pupeno Ah, cool!
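A minimal sketch of the sequence samppah describes above, using his placeholder server and brick names:

    # create a plain single-brick (distribute) volume now
    gluster volume create volName server1:/brick1
    gluster volume start volName
    # later, add a second brick and raise the replica count to convert it to replicate
    gluster volume add-brick volName replica 2 server2:/brick1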
08:15 andreask joined #gluster
08:19 f31n joined #gluster
08:26 f31n left #gluster
08:29 doekia ?
08:29 doekia !@doekia factoid
08:30 doekia factoid
08:30 partner works, i've done it. now i am trying to do it the opposite direction, i have 2x2 dist-repl which i want to turn into distributed as we're running out of disk
08:31 partner seeing from the backlog it seems it should work too, not sure of the correct approach thought so i'll just try it out somewhere other than production first :o
08:33 partner or rather the issue is larger files than bricks so i will just remove half of the bricks and then expand the remaining ones with lvm to double size
08:34 ngoswami joined #gluster
08:35 Slashman joined #gluster
08:40 ThatGraemeGuy joined #gluster
08:42 hchiramm__ joined #gluster
08:43 hybrid512 joined #gluster
08:44 saurabh joined #gluster
08:47 rahulcs joined #gluster
09:12 hchiramm_ joined #gluster
09:18 ctria joined #gluster
09:21 liquidat joined #gluster
09:21 wgao joined #gluster
09:28 Pupeno If I have a replica with two bricks and each server is also a client, should they mount using the local server?
09:33 tziOm joined #gluster
09:36 lalatenduM Pupeno, there is no rule abt this
09:38 rahulcs joined #gluster
09:43 andreask I'd mount the local server so you never need to care about IPs or hostnames when mounting
09:45 rahulcs joined #gluster
09:45 rahulcs joined #gluster
09:48 davinder2 joined #gluster
09:52 ira joined #gluster
09:52 rahulcs joined #gluster
09:53 rahulcs joined #gluster
09:59 shubhendu joined #gluster
09:59 Pupeno I added this line to my fstab:
09:59 Pupeno profitmargin:/uploads /mnt/uploads glusterfs defaults,_netdev 00
09:59 Pupeno Why is it not mounting the volume at boot time?
10:14 rahulcs joined #gluster
10:26 koodough joined #gluster
10:32 ctria joined #gluster
10:37 qdk joined #gluster
10:40 rahulcs_ joined #gluster
10:41 kanagaraj joined #gluster
10:43 kanagaraj joined #gluster
10:47 Pupeno Ok, I posted all the info here: http://serverfault.com/questions/596942/glusterfs-fails-to-mount-on-boot-but-mounts-later-in-ubuntu-12-04
10:47 glusterbot Title: GlusterFS fails to mount on boot but mounts later in Ubuntu 12.04 - Server Fault (at serverfault.com)
10:48 rahulcs joined #gluster
10:49 karimb joined #gluster
10:49 karimb hi buddies. how can i troubleshoot a cifs issue with gluster ?
10:49 karimb given smbclient -L SERVER returns the gluster share
10:49 karimb and i get Retrying with upper case share name
10:49 karimb mount error(6): No such device or address
10:50 karimb when trying to mount it
10:55 rahulcs joined #gluster
11:05 fsimonce` joined #gluster
11:08 fsimonce` joined #gluster
11:13 ppai joined #gluster
11:16 partner Pupeno: maybe this helps to understand the issue: http://joejulian.name/blog/glusterfs-volumes-not-mounting-in-debian-squeeze-at-boot-time/
11:16 glusterbot Title: GlusterFS volumes not mounting in Debian Squeeze at boot time (at joejulian.name)
11:16 partner haven't tried out with ubuntu thought but possibly/probably the mount is attempted before everything required is loaded
11:17 Pupeno partner: there was such a bug, but it's supposed to be fixed now.
11:18 RameshN_ joined #gluster
11:20 Pupeno partner: my problem is not lack of fuse, or am I reading the logs wrongly?
11:25 RameshN joined #gluster
11:30 rjoseph joined #gluster
11:33 nshaikh joined #gluster
11:34 dusmant joined #gluster
11:40 RameshN joined #gluster
11:41 RameshN_ joined #gluster
11:44 partner hard to say but if a network mount does not come up on boot it strongly suggests the issue is somewhere in the initscripts, should be visible on the console thought if one should fail
11:44 partner might be its different but at least i've got some issues with that stuff
11:51 partner oh, sorry, i missed the link and some log entries of it which then again point towards resolving
11:56 dusmant joined #gluster
12:00 diegows joined #gluster
12:03 scuttle_ joined #gluster
12:05 sjm joined #gluster
12:07 mjsmith2 joined #gluster
12:07 askb joined #gluster
12:13 askb joined #gluster
12:16 itisravi_ joined #gluster
12:24 ctria joined #gluster
12:32 askb joined #gluster
12:33 yosafbridge joined #gluster
12:35 japuzzo joined #gluster
12:42 ravindran1 joined #gluster
12:43 karimb joined #gluster
12:43 ravindran1 left #gluster
12:44 Ark joined #gluster
12:45 ndarshan joined #gluster
12:46 haomaiwa_ joined #gluster
12:48 d-fence joined #gluster
12:52 ctria joined #gluster
12:55 karimb hi buddies
12:55 karimb does somebody know this error ?
12:55 karimb error probing vfs module 'glusterfs': NT_STATUS_UNSUCCESSFUL
13:02 DV_ joined #gluster
13:07 ndarshan joined #gluster
13:11 primechuck joined #gluster
13:12 rwheeler joined #gluster
13:13 ccha2 hum, I have an replicate on 2 servers, but I have 1 server with nfs-server which doesn't start
13:18 ccha2 ok it was rpcbind which didn't start
13:24 ndevos karimb: doe you have /usr/lib64/samba/vfs/glusterfs.so ?
13:25 karimb ndevos, indeed found out i was missing samba-glusterfs on the storage nodes
13:25 karimb ndevos, thanks though ;)
13:25 ndevos karimb: :)
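For reference, a hedged sketch of an smb.conf share that uses the glusterfs vfs module shipped in that package; the share name, volume name and server here are placeholders, not taken from karimb's setup:

    [gluster-share]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = gv0
        glusterfs:volfile_server = localhost
        kernel share modes = no
        read only = no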
13:27 mattrixh joined #gluster
13:34 bennyturns joined #gluster
13:36 dfrobins joined #gluster
13:40 dfrobins joined #gluster
13:40 hagarth joined #gluster
13:41 ccha2 is it possible to have this issue http://review.gluster.org/#/c/6343/ with 3.5.0 ?
13:41 glusterbot Title: Gerrit Code Review (at review.gluster.org)
13:41 ccha2 on 1 server, there isn't indices directory
13:41 ccha2 the other one had indices directory
13:42 ccha2 how can I fix it?
13:42 ccha2 manualy create it ?
13:45 ndevos ccha2: just quickly looked at it, and it seems that this patch has not been included in a 3.5 release yet
13:46 dfrobins My system is repeatedly showing the same number of healed/failed during a "gluster volume heal homegfs".  When I look at the logs, there are thousands of "W [client-rpc-fops.c:1529:client3_3_inodelk_cbk] 0-homegfs-client-1: remote operation failed: Stale file handle" errors messages.  The heal doesn't seem to be working.
13:49 mjsmith2 joined #gluster
13:50 dfrobins This started occurring when I made a backup of several of my workstations onto the gluster mounted file system using rsync.  I have my gluster file system mounted as "gfsib01a.corvidtec.com:/homegfs on /homegfs type fuse.glusterfs (rw,allow_other,max_read=131072)" and I do an rsync from my workstation onto this mounted volume...  This seems to be the only time I am getting these problems with gluster
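When a heal appears stuck like this, the usual first step is to list what the self-heal daemon still has pending; a sketch using the volume name from the message above:

    gluster volume heal homegfs info
    gluster volume heal homegfs info split-brain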
13:57 ccha2 ndevos: when I add something inside the volume from a client, the indices folder is created
13:58 ira joined #gluster
13:59 John_HPC joined #gluster
14:05 wgao joined #gluster
14:06 ndevos ccha2: right, so the problem is easily overcome, I understand?
14:14 ccha2 yes
14:15 jobewan joined #gluster
14:20 rahulcs joined #gluster
14:28 monotek joined #gluster
14:31 rahulcs joined #gluster
14:36 DV__ joined #gluster
14:39 ccha2 hum why if I created a 1G size file with dd somewhere and cp on glusterfs vol that took 20sec and if I dd same size on the glusterfs, that took 200sec ?
14:39 tdasilva joined #gluster
14:40 aviksil joined #gluster
14:40 ndevos ccha2: depends on the blocksize that you use, I guess - many small blocks have much more overhead than big blocks - and you should add a sync after the copy too
14:41 ccha2 bs=512
14:43 ccha2 and count=2000K
14:44 hagarth joined #gluster
14:44 ricky-ticky1 joined #gluster
14:45 lmickh joined #gluster
14:45 ccha2 yes I switched it and it's better
14:45 ccha2 dd and the overhead
14:46 ccha2 thank for the explaination
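A sketch of the difference ndevos describes; the mount point and file names are placeholders:

    # 512-byte blocks: every block is a separate request through FUSE and over the network
    dd if=/dev/zero of=/mnt/gv0/test1 bs=512 count=2000K
    # 1 MiB blocks move the same ~1 GB with far fewer round trips, then flush at the end
    dd if=/dev/zero of=/mnt/gv0/test2 bs=1M count=1024 conv=fsync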
14:47 shubhendu joined #gluster
14:49 sulky joined #gluster
14:51 Staples84 joined #gluster
14:52 kaptk2 joined #gluster
14:56 mjsmith2 joined #gluster
14:58 monotek joined #gluster
15:01 coredump|br joined #gluster
15:06 jag3773 joined #gluster
15:08 Pupeno Inside the brick, I can see the same file structure as the volume, but the files contain something different inside, right?
15:14 beneliott joined #gluster
15:14 rwheeler joined #gluster
15:18 daMaestro joined #gluster
15:18 ProT-0-TypE joined #gluster
15:27 jbd1 joined #gluster
15:30 Pupeno What's the overhead per file of GlusterFS?
15:32 scuttle_ joined #gluster
15:33 Slashman Pupeno: some meta data, stored in the inode space, that's why when formating for glusterfs with xfs, you want to format with the option "-i size=512"
15:45 ndevos Pupeno: on normal distribute/replicated volumes, the files are just saved as files on the bricks, they only add some extended attributes
15:48 Pupeno Doing an experiment I just copied a bunch of empty files to a volume, and on the brick, the file structure is the same, but each file has a 100k of binary data on it.
15:48 Pupeno Nevermind that.
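A sketch of the brick preparation Slashman and ndevos describe; the device and brick paths are placeholders:

    # 512-byte inodes leave room for gluster's extended attributes inside the inode
    mkfs.xfs -i size=512 /dev/sdb1
    mount /dev/sdb1 /data/brick1
    # per-file metadata lives in xattrs on the brick, which you can inspect with:
    getfattr -m . -d -e hex /data/brick1/somefile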
16:04 sprachgenerator joined #gluster
16:22 nikk_ joined #gluster
16:23 Mo___ joined #gluster
16:25 systemonkey joined #gluster
16:29 lyang0 joined #gluster
16:32 ramteid joined #gluster
16:34 wgao joined #gluster
16:36 rahulcs joined #gluster
16:43 sprachgenerator joined #gluster
16:43 lalatenduM joined #gluster
16:55 t35t0r joined #gluster
16:58 [o__o] joined #gluster
17:27 vpshastry joined #gluster
17:28 ktosiek joined #gluster
17:28 semiosis :O
17:35 fleducquede joined #gluster
17:35 flowouffff Hi guys
17:36 flowouffff is there anyone here for helping a guy struggling with an  gluster-swift error ( http://pastebin.com/mTk2ydLA )
17:36 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
17:36 flowouffff ?
17:37 flowouffff conf centos 6.3 / openstack havana / gluster-swift 1.10
17:38 flowouffff the main error is quite: Account Server 127.0.0.1:6012/volume_not_in_ring
17:38 flowouffff i cant figure out what's wrong
17:41 anotheral i've got a pretty serious problem with a rebalance happening when trying to remove an incorrectly-sized set of bricks
17:41 anotheral http://paste.ubuntu.com/7489509/
17:41 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
17:43 zaitcev joined #gluster
17:48 MacWinner joined #gluster
17:49 MacWinner On bootup, my gluster mount doesn't seem to take effect.  If I type mount -a after boot, then it looks like it works.  in /etc/fstab I have this:  elk1-internal:/storage /mnt/storage glusterfs defaults,_netdev 0 0
17:49 MacWinner didn't see anything apparent in /var/log entries.. but not sure if I'm looking in the right place
17:52 flowouffff check the mount logs in /var/log/glusterfs/
17:52 flowouffff MacWinner,
17:52 flowouffff you should probably find useful messages
17:53 MacWinner cool, thanks.. I have been doing an upgrade of a 4-node cluster and was just rebooting because there were some kernel updates..
17:54 MacWinner don't see anything particularly interesting there..  curious though… have I configured my /etc/fstab entry correctly so that it _should_ mount on boot?
17:56 wushudoin joined #gluster
17:57 MacWinner think it's related to this: https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/876648
17:57 glusterbot Title: Bug #876648 “Unable to mount local glusterfs volume at boot” : Bugs : “glusterfs” package : Ubuntu (at bugs.launchpad.net)
17:57 MacWinner though I'm using Centos 6.4.. centos6.4 uses upstart i believe..
17:58 flowouffff hmm maybe sounds like u found the root cause
18:07 gmcwhistler joined #gluster
18:09 eightyeight joined #gluster
18:10 MacWinner yeah.. oddly enough, I had to read about upstart a bit a couple days ago since i'm migrating some of our custom services to it
18:10 semiosis upstart \m/
18:10 semiosis MacWinner: are you on ubuntu?
18:11 semiosis oh wait, i see you're on centos
18:11 MacWinner centos 6.4
18:11 semiosis yes, centos & ubuntu both use upstart, but the way they process fstab & mount filesystems are *very* different
18:11 semiosis that ubuntu bug shouldn't be relevant at all to centos
18:12 semiosis MacWinner: most likely the mount was attempted during boot, but it failed.  clear the client log file & reboot, then there should be a failure error in the glusterfs client log file
18:13 monotek joined #gluster
18:13 MacWinner semiosis, k… will do in just a minute.. in the middle of rebooting all machines.. this issue seems to be consistent though, so hopefully we can figure it out.  I have a pretty vanilla setup
18:14 anotheral so what does one do about duplicate directories on client mounts during a rebalance?
18:14 anotheral and how can I find out why this remove-brick operation has taken over a week with no sign of ending?
18:17 MacWinner semiosis, so I'm clearing out /var/log/glusterfs/mnt-storage.log
18:17 semiosis ok
18:17 MacWinner since my mount point /mnt/storage
18:20 MacWinner rebooted..  when I look for  /var/log/glusterfs/mnt-storage.log  it does not exist.. it's almost like the mount is not being run on reboot
18:22 MacWinner semiosis, should i just run the mount -a command now?
18:23 semiosis do you have netfs service starting at boot?
18:25 MacWinner semiosis, i think so.. but just noticed it's not at run level 3
18:25 MacWinner it's on for 4 and 5
18:25 semiosis that might be your problem!
18:25 semiosis brb
18:26 MacWinner cool, i'll try changing that and seeing if it works!  btw, I don't believe I've fiddled with the netfs settings, so this may be default for most ppl running centos 6.x
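On CentOS 6 the netfs init script is what mounts _netdev entries from fstab, so the check MacWinner is about to do looks roughly like this:

    chkconfig --list netfs      # should be "on" for the runlevel you actually boot into
    chkconfig netfs on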
18:28 primechuck joined #gluster
18:29 kmai007 joined #gluster
18:30 rotbeard joined #gluster
18:31 semiosis default for most people is "gluster works" afaik
18:31 semiosis :)
18:32 MacWinner semiosis, cool, that seems to fix it!
18:32 semiosis yay
18:32 MacWinner semiosis, yeah.. gluster works beauitfully!  i tried all kinds of stuff..
18:32 semiosis great
18:32 MacWinner semiosis, just this little reboot thing.. but seems to be a default config issue with Centos
18:33 semiosis interesting
18:33 kmai007 so how many folks are using gluster3.4-3 ?
18:36 jag3773 joined #gluster
18:41 rahulcs joined #gluster
18:42 dbruhn joined #gluster
18:54 scuttle_ joined #gluster
18:56 andreask joined #gluster
18:58 JoeJulian kmai007: Pretty small sample for a survey. Why do you ask?
19:06 qdk joined #gluster
19:07 lpabon joined #gluster
19:09 kmai007 oh i just saw that 3.4.3-3 is GA, and I'm running 3.4.2 in production
19:09 kmai007 debating if i should upgrade it
19:26 John_HPC I was running 3.4.3-3; but upgraded to the latest 3.5 already
19:27 kmai007 and you love 3.5?
19:29 rahulcs_ joined #gluster
19:33 chirino joined #gluster
19:35 zerick joined #gluster
19:39 John_HPC So far. 3.5 has been working just fine.
19:40 John_HPC I'm spending most of my time repairing the data; as the servers were dropped in my lap after not being maintains for 3 years :P
19:40 John_HPC maintained*
19:40 kmai007 yikes
19:40 kmai007 how big is your environment?
19:40 kmai007 how many storage nodes and clients?
19:41 John_HPC Not that big. Its on a backend network of a server right now. But its 6 servers, with 6 bricks each in a 18x2 replication
19:41 John_HPC about 160ish TB of data
19:42 John_HPC Eventually it will be network mounted and accessable to our HPC clusters for data analysis.
19:42 eightyeight joined #gluster
19:45 asku joined #gluster
19:45 John_HPC Eventually, it may have up to 60 servers mounting it.
19:45 John_HPC Right now, I gotta get past the hdds failing ;)
19:49 weykent joined #gluster
19:51 anotheral so is there anything I can do about these duplicate directories?
19:53 weykent hi, i'm having some trouble mounting a glusterfs volume. it will successfully mount if i mount it with the device 'gluster1:/weasyl', but it fails if i try to mount it with a volfile containing what i think is the same information. the error i get on the client is 'failed to get the port number for remote subvolume'. the volfile is https://paste.weasyl.com/show/basDYuuLg9hOb8wE3AEI/
19:53 glusterbot Title: Paste #basDYuuLg9hOb8wE3AEI at spacepaste (at paste.weasyl.com)
19:53 kmai007 anotheral: duplicate directories on the storage or client?
19:53 johnmwilliams__ joined #gluster
19:54 anotheral on the client mount of the gluster
19:54 borreman_dk joined #gluster
19:54 edward1 joined #gluster
19:54 anotheral they show up as having the same inode
19:54 kmai007 anotheral: if you mount the volume up on another client, do you get the same listing?
19:54 kmai007 do you see that on the storage?
19:55 kmai007 have you tried to unmount and remount it?
19:55 nixpanic joined #gluster
19:55 silky joined #gluster
19:55 nixpanic joined #gluster
19:55 cogsu joined #gluster
19:55 txmoose joined #gluster
19:56 eightyeight joined #gluster
19:56 neoice joined #gluster
19:56 anotheral persists across remounts, appears on all clients
19:56 auganov joined #gluster
19:57 weykent alternately, is there a better way of mounting glusterfs on a client such that it can connect to one of multiple replicated glusterfs servers than using a volfile?
19:57 weykent afaik if i mount 'gluster1:/weasyl', it'll depend on the host 'gluster1' being up
19:57 avati joined #gluster
19:58 anotheral kmai007: looks like this: http://paste.ubuntu.com/7490014/
19:58 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
19:58 kmai007 anotheral: looks ilke you may have the case of split-brains
19:58 kmai007 anotheral: have you tried to run 'gluster volume heal <vol> heal info split-brain'
20:00 kmai007 weykent: if you are mounting by hand, you have to pick the server that is UP so it can retrieve the vol file
20:00 doekia joined #gluster
20:00 anotheral kmai007: "Volume gv0 is not of type replicate"
20:00 kmai007 weykent: in the /etc/fstab you can specify an alternative vol server if the main one is not available
20:00 weykent kmai007, ok, and what if i don't know which server is up? i'm trying to make this system properly redundant
20:00 weykent kmai007, oh i see. how do i do that?
20:01 kmai007 in /etc/fstab you can specify it as a mount option
20:01 kmai007 backupvolfile-server=hostname,fetch-attempts=3
20:02 weykent ah
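A sketch of the fstab form kmai007 describes, reusing weykent's volume name from earlier; the backup server and mount point are placeholders:

    gluster1:/weasyl  /mnt/weasyl  glusterfs  defaults,_netdev,backupvolfile-server=gluster2  0 0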
20:02 anotheral huh, not in the man page
20:02 kmai007 alot is not in the man pages
20:02 kmai007 anotheral: what kind of setup is it?
20:02 kmai007 distributed?
20:02 kmai007 dist-rep?
20:02 * anotheral makes a note to contribute documentation to open-source projects
20:02 anotheral just distributed
20:03 kmai007 strange, then it shouldn't have dups.
20:03 anotheral this started happening when I ran some remove-brick operations
20:03 anotheral which have been running for like 8 days now just to migrate 1tb of data
20:03 kmai007 did u do a rebalance? i don't recall if that is the procedure when you remove/add bricks
20:04 kmai007 yeh i'm sorry i don't have much experience with that
20:04 anotheral it's 3.4, so the rebalance should be built into the remove-brick operation
20:04 anotheral if the docs are to be trusted
20:04 JoeJulian joined #gluster
20:04 kmai007 maybe an email to gluster-users@gluster.org
20:04 anotheral at this point we would be happy to pay for some kind of support
20:04 kmai007 you and me both
20:04 anotheral :-/
20:04 kmai007 i'm just another user
20:04 JoeJulian why do you keep saying that?!?! :P
20:04 kmai007 but the community is helpful, the email is a good source
20:05 JoeJulian Did you unmount and remount?
20:05 anotheral alright, will try that
20:05 kmai007 hahha tht was my 2nd suggestion
20:05 kmai007 scroll up
20:05 JoeJulian anotheral: I'm going to lunch. Be back in about an hour and can solve that problem then.
20:06 anotheral JoeJulian: ok i'll be here thanks!
20:08 ry joined #gluster
20:09 weykent kmai007, is backupvolfile-server new-ish? i'm using gluster 3.2 at the moment
20:09 kmai007 yeh man that is for 3.4
20:09 weykent dang
20:09 kmai007 i think that is only available for 3.3+
20:09 weykent been planning an upgrade to 3.5 anyway
20:10 weykent actually, that's another thing: can i upgrade straight from 3.2 to 3.5 without going through 3.3?
20:10 kmai007 gotcha, sorry i'm not much help
20:10 kmai007 you should send an email to gluster-users@gluster.org,
20:10 kmai007 i think 3.4-3.5 is ok, but i do'nt know about 3.2
20:10 weykent heh heh
20:12 weykent i hate mailing lists though :(
20:13 andreask joined #gluster
20:13 facemangus joined #gluster
20:13 facemangus If anybody is around, to bring an install at 2.0.9 up to 3.0+, is there an upgrade path or would it be easier to back up configs nuke it and start over? (production environment)
20:14 facemangus I am partial to fresh install just because the current one done by my predecessor is... unqiue in its craptacity
20:16 John_HPC weykent: you should be able to upgrade from 3.2 to 3.5. I upgraded earlier going from 3.2 to 3.4.3. --NOTE-- follow the upgrade instructions for 3.3; as it makes some changes you'll have to adjust for.
20:16 weykent John_HPC, okay, thanks
20:16 John_HPC specifically the location of the config files
20:16 John_HPC http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
20:17 John_HPC Anyway, followed that and it upgraded just fine.
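Roughly the shape of that upgrade on each server, per the linked post; package commands and paths vary by distro, so treat this as a sketch rather than the canonical procedure:

    service glusterd stop
    yum upgrade glusterfs glusterfs-server glusterfs-fuse   # or your distro's equivalent
    # 3.3+ moved the working directory from /etc/glusterd to /var/lib/glusterd
    glusterd --xlator-option '*.upgrade=on' -N              # one-off volfile regeneration
    service glusterd start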
20:18 semiosis facemangus: 2.0.9?!?!  that's archaic
20:18 facemangus Tell me about it >.>
20:18 facemangus I don't even have the gluster command
20:18 semiosis that was introduced in 3.1
20:19 chirino joined #gluster
20:19 facemangus In fact, I am not even sure 2.0.9 can even tell if it is brain split
20:19 facemangus Or run well hah
20:19 [o__o] joined #gluster
20:20 semiosis i started with gluster in 3.1, dont know much about how it was before that
20:20 facemangus :\
20:21 facemangus I might have to start over, it just means I will have to do it on my own time because LOL PRODUCTION
20:21 facemangus There will be a late night in my future
20:31 zerick joined #gluster
20:33 mjsmith2 joined #gluster
20:34 sage___ joined #gluster
20:34 sage___ joined #gluster
20:34 John_HPC facemangus: must love it when stuff just gets dropped in youir lap ;)
20:36 facemangus yup
20:37 facemangus an entire network of shi*
20:38 facemangus ancient gluster, improper nginx load balancing, bad habbits everywhere
20:38 facemangus I mean we all have bad habbits, but these are like "my first linux" mistakes
20:38 facemangus also: production environmnet
20:39 mjrosenb joined #gluster
20:39 gmcwhistler joined #gluster
20:39 * mjrosenb wonders what is in #glusterfs
20:41 facemangus Oh wow
20:42 facemangus boss just asked me to not update it, because downtime isn't acceptable even when your production load balancer is out of sync it isn't a big deal I guess
20:42 facemangus asked if we can manually sync it
20:42 facemangus SURE BOSS MAN
20:42 mjrosenb so, there is some code for "accelerating" programs by LD_PRELOAD'ing them to talk directly to the bricks, rather than to go through fuse.  Does anyone know how that works?
20:44 John_HPC facemangus: http://img3.wikia.nocookie.net/__cb20131217081420/cardfight/images/8/8a/Triple_facepalm.png
20:45 facemangus if they had grenades in their hands it would be more accurate
20:45 facemangus but I don't even care anymore, I love gluster I just wish I was allowed to support it properly
20:45 John_HPC yep
20:45 facemangus The last time I was here someone helped me dig myself out of a multimillion $ gluster crash
20:45 facemangus that shouldn't have happened
20:46 John_HPC good luck. I am out for the night
20:48 sadbox joined #gluster
20:49 chirino joined #gluster
20:53 sadbox joined #gluster
20:53 Licenser joined #gluster
20:58 badone joined #gluster
21:13 dfrobins left #gluster
21:24 JoeJulian anotheral: On the server that you ran the remove-brick on, there should be a log specific to that task. Let's start with an fpaste of the tail of that file.
21:27 mjrosenb JoeJulian: you know about everything! you happen to know about the non-fuse code for accessing bricks?
21:28 JoeJulian Not enough. What's the question?
21:33 qdk joined #gluster
21:36 anotheral JoeJulian: ok, there's not a log for remove-brick, but there's an active rebalance log
21:36 JoeJulian anotheral: Yep, that would be it.
21:36 anotheral lots of failed no space left on device errors
21:37 anotheral we have a lot of full bricks
21:37 anotheral but i can't add more bricks because of the ongoing rebalance
21:37 JoeJulian Cancel it
21:37 anotheral remove-brick stop?
21:37 JoeJulian sounds right.
21:37 anotheral gluster 3.4 btw
21:38 JoeJulian yep, stop
21:38 anotheral ok, it's stopped
21:38 JoeJulian Now you should be able to add-brick
21:39 gmcwhistler joined #gluster
21:39 anotheral so the backstory is that we added some bricks that were larger than the existing ones, expecting it to balance correctly
21:39 anotheral it did not, and we're trying to remove them now
21:39 JoeJulian Right. I read that.
21:40 anotheral "volume add-brick: failed: /data/gv0/brick37 or a prefix of it is already part of a volume"
21:40 glusterbot anotheral: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
21:40 anotheral glusterbot: thanks :)
21:40 glusterbot anotheral: I do not know about 'thanks :)', but I do know about these similar topics: 'thanks'
21:41 * JoeJulian loves that auto-faq.
21:41 anotheral hm, those were never added in the first place
21:42 JoeJulian Check the ,,(extended attributes) on that directory (and any parent directories)
21:42 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
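The cleanup the linked post walks through is roughly the following, run against the brick directory from the error; as the rest of this conversation shows, make sure the directory really is unused before removing anything:

    getfattr -m . -d -e hex /data/gv0/brick37      # inspect what is set
    setfattr -x trusted.glusterfs.volume-id /data/gv0/brick37
    setfattr -x trusted.gfid /data/gv0/brick37
    rm -rf /data/gv0/brick37/.glusterfs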
21:44 anotheral ah yes, it does have a volume-id
21:45 anotheral and one of them has a gfid also
21:45 anotheral definitely not showing up in the volume status though
21:45 anotheral they're all empty
21:46 anotheral ugh no they're not
21:46 anotheral the last one with the gfid has data in it
21:46 sadbox joined #gluster
21:47 anotheral fuuuuu
21:50 anotheral so somehow it's being used for the rebalance without having been correctly added?
21:51 semiosis JoeJulian: mjrosenb was asking about using LD_PRELOAD to give apps direct access to gluster
21:51 anotheral at this point it's a race against the rsync finishing so we can nuke and start over :-(
21:52 anotheral but 60TB is a lot to rsync
21:52 JoeJulian Yeah
21:52 semiosis mjrosenb:  there used to be a lib called booster which did that, i think.
21:52 stickyboy joined #gluster
21:53 JoeJulian mjrosenb: Here's how we overrode the o_direct flag on open until fuse was patched to fix this: https://github.com/avati/liboindirect
21:53 glusterbot Title: avati/liboindirect · GitHub (at github.com)
21:54 JoeJulian mjrosenb: Using that example, it shouldn't be all that hard to override all posix file operations to use the libgfapi equivalent.
21:55 JoeJulian mjrosenb: You would just need some way of initializing the server and volume.
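For context, a minimal libgfapi sketch in C showing the "initializing the server and volume" step JoeJulian mentions; the volume name, host and file path are placeholders and error handling is abbreviated (build with gcc demo.c -lgfapi):

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("gv0");                         /* volume name */
        glfs_set_volfile_server(fs, "tcp", "server1", 24007); /* any server in the pool */
        if (glfs_init(fs) != 0) {                             /* fetch volfile, connect to bricks */
            perror("glfs_init");
            return 1;
        }
        glfs_fd_t *fd = glfs_creat(fs, "/hello.txt", O_WRONLY, 0644);
        const char msg[] = "written via libgfapi, no FUSE mount involved\n";
        glfs_write(fd, msg, strlen(msg), 0);
        glfs_close(fd);
        glfs_fini(fs);
        return 0;
    }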
21:56 KennethWilke joined #gluster
21:56 mjrosenb ahh, libgfapi, I bet that is what I'm thinking of.
21:56 mjrosenb so if you remember, my bricks are all freebsd, because <3 zfs
21:57 mjrosenb and since that is also where my temp storage is befor it gets moved into gluster, Id like to be able to just copy it to the bricks directly, bypassing the slow linux server that actually speaks gluster+fuse
21:58 anotheral so JoeJulian, am I pretty much hosed?
21:58 mjrosenb so I figured I could use the gluster-non-fuse thing to access it directly on the freebsd bricks
21:58 mjrosenb since glusted doesn't speak fuse on freebsd.
21:59 KennethWilke howdy guys, i was looking at managing gluster with salt and found that the module author expects the brick path to be the same on all peers. this is a bad assumption correct?
22:00 mjrosenb KennethWilke: in general, yes, but I think salt generally assumes that your systems are going to be pretty homogenous.
22:01 rwheeler joined #gluster
22:01 karimb joined #gluster
22:01 KennethWilke mjrosenb, alrighty thanks for confirming that, i'll try to get a PR to fix that module tomorrow
22:02 anotheral also, is it ok to leave the remove-brick in a stopped state?
22:03 JoeJulian yes
22:04 JoeJulian That is, yes it's okay
22:04 anotheral at least this is a read-only fs for the most part
22:04 anotheral don't have to worry about more data being shoveled in
22:05 JoeJulian mjrosenb: Sure, there are python hooks built which would, imho, be quickest and lightest.
22:05 JoeJulian Or you can use C, java, go...
22:09 avati JoeJulian, it's not all that hard to trap all the system calls.. but to get the semantics right can be quite tricky, especially when processes fork()
22:10 JoeJulian Ah
22:10 avati a ptrace based implementation can make things much more easier to implement semantics, but i suspect the performance gains would not be as high as libgfapi
22:11 avati and to use it as a general purpose non-fuse solution might not be true either.. liboindirect like LD_PRELOAD is glibc specific.. so would not ncessarily work on freebsd
22:11 P0w3r3d joined #gluster
22:11 avati freebsd might support something equivalent to LD_PRELOAD (not sure wht it is), so would need bindings to that technology
22:13 avati i'm tempted to revive booster next
22:13 avati even though it does not work 100%, it could still be useful where it works 80% of the times
22:14 mjrosenb JoeJulian: well, presumably, I'd want to patch rsync, bash, and ls?
22:14 avati patch ls?
22:20 JoeJulian anotheral: about the brick you wanted to add that had data: is that brick part of the volume (gluster volume info)? If not, why not format it and add it.
22:20 mjrosenb avati: yeah, patch them to use the libgluster open/getdirents/etc. rather than the libc versions.
22:21 anotheral ah weird - it shows up in volume info, but not volume status
22:24 JoeJulian I would guess that if it's not in status, it's not started.
22:24 JoeJulian anotheral: "gluster volume start $volname force"
22:24 JoeJulian or restart glusterd on that server.
22:25 anotheral just the one who owns that brick?
22:25 JoeJulian yep
22:26 Pupeno Any ideas why my volumes are not being mounted at boot time? http://serverfault.com/questions/596942/glusterfs-fails-to-mount-on-boot-but-mounts-later-in-ubuntu-12-04
22:26 glusterbot Title: GlusterFS fails to mount on boot but mounts later in Ubuntu 12.04 - Server Fault (at serverfault.com)
22:27 anotheral hmm, did both just to be sure - brick still doesn't show up in status
22:27 JoeJulian anotheral: Check the brick log.
22:28 JoeJulian Pupeno: upstart is trying to start your mount before the brick servers are running.
22:28 anotheral is none
22:29 * JoeJulian raises an eyebrow...
22:29 Pupeno JoeJulian: by brick servers, do you mean glusterfs server? or some sort of server that runs on the client machine?
22:29 JoeJulian anotheral: peer status from 2 different servers all looks okay?
22:30 JoeJulian ~processes | Pupeno
22:30 glusterbot Pupeno: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more
22:30 glusterbot information.
22:30 anotheral yup
22:31 mjrosenb JoeJulian: Server not found
22:31 Pupeno JoeJulian: the brick server is running on another vm that was running when this vm got booted.
22:31 JoeJulian anotheral: Show me volume info and volume status please.
22:31 mjrosenb Firefox can't find the server at gluster.helpshiftcrm.com
22:32 JoeJulian @change processes s/See.*//
22:32 glusterbot JoeJulian: Error: The command "change" is available in the Factoids, Herald, and Topic plugins. Please specify the plugin whose command you wish to call by using its name as a command before "change".
22:32 JoeJulian @factoids change processes s/See.*//
22:32 glusterbot JoeJulian: Error: 's/See.*//' is not a valid key id.
22:32 JoeJulian @factoids change processes 1 s/See.*//
22:32 glusterbot JoeJulian: The operation succeeded.
22:33 anotheral JoeJulian: ugh, sorry PEBCAK
22:33 JoeJulian :)
22:33 anotheral i was confusing 'brick37' then brick index with 'brick37' the mountpoint
22:34 JoeJulian makes sense
22:34 anotheral i appreciate the help, but i think our best bet is to wait a week for the rsync to finish, and then nuke from orbit
22:34 JoeJulian okie-dokie
22:34 anotheral speaking of inherited clusters :-/
22:34 JoeJulian I'm about to inherit one myself.
22:38 anotheral hopefully the duplicate directories don't screw up the rsync
22:38 anotheral if you're interested in figuring out what's causing that on a non-replicated cluster for your own edification, let me know
22:39 MugginsM joined #gluster
22:45 JoeJulian I'm pretty sure I already know. I've caused it before, even recently when I was trying to cause it.
22:45 anotheral is there a realitively straightforward fix?
22:47 JoeJulian Start at the root and make sure they have the same gfid and that none of the .glusterfs/00/00/000*001 are directories (they should be symlinks).
22:47 JoeJulian that's directly on the bricks, of course.
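A sketch of the check JoeJulian describes, run directly on each brick's root directory (paths are placeholders):

    # every brick's root should carry the all-zeros gfid ending in 1
    getfattr -n trusted.gfid -e hex /data/gv0/brickN
    # and its .glusterfs entry for that gfid should be a symlink, not a directory
    ls -ld /data/gv0/brickN/.glusterfs/00/00/00000000-0000-0000-0000-000000000001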
22:50 anotheral all right
22:50 JoeJulian Root will all be gfid 0000*0001 ...
22:51 anotheral that'd be the trusted.gfid attr?
22:51 JoeJulian yes
22:52 anotheral ah, except for that one weird brick, they all have the same gfid
22:53 anotheral let's see if that's the guy with the duplicate data
22:53 JoeJulian use setfattr to fix it.
22:55 anotheral will that allow glusterd to pick it up as a brick?
22:55 anotheral it's the one with data and attributes, but not listing in status or info
23:01 [o__o] joined #gluster
23:06 JoeJulian anotheral: Oh, no. That's not a brick. I would lean toward it being likely that the files on that drive are already (still) on the volume.
23:08 anotheral everything else appears to have proper gfids and symlinks
23:19 mjsmith2 joined #gluster
23:49 badone joined #gluster
23:50 mjsmith2 joined #gluster
23:52 mjsmith2_ joined #gluster
