
IRC log for #gluster, 2013-07-24


All times shown according to UTC.

Time Nick Message
00:06 edong23 joined #gluster
00:33 bala joined #gluster
00:41 Cry joined #gluster
00:42 Cry I am new to gluster.  Am having some issues if someone might advise.
00:42 Cry One issue is that when I try to run automake, it blocks on a lock in the autom4ke file
01:00 yinyin joined #gluster
01:00 mooperd joined #gluster
01:16 raghug joined #gluster
01:29 kevein joined #gluster
01:52 harish joined #gluster
01:56 mooperd joined #gluster
01:59 asias joined #gluster
02:25 jmeeuwen_ joined #gluster
02:30 MugginsM joined #gluster
02:42 lalatenduM joined #gluster
02:53 vshankar joined #gluster
02:54 bharata joined #gluster
03:11 recidive joined #gluster
03:11 kkeithley joined #gluster
03:19 jag3773 joined #gluster
03:31 sgowda joined #gluster
03:32 hagarth joined #gluster
03:42 rastar joined #gluster
03:44 raghug joined #gluster
03:47 vshankar joined #gluster
03:53 bulde joined #gluster
03:57 bulde1 joined #gluster
04:03 jag3773 joined #gluster
04:14 vpshastry joined #gluster
04:30 bala joined #gluster
04:30 hchiramm_ joined #gluster
04:32 kshlm joined #gluster
04:36 satheesh joined #gluster
04:45 sgowda joined #gluster
04:45 raghug joined #gluster
04:48 vpshastry1 joined #gluster
04:49 vshankar joined #gluster
04:50 CheRi joined #gluster
04:53 rjoseph joined #gluster
04:57 hchiramm_ joined #gluster
04:57 shylesh joined #gluster
05:01 raghug joined #gluster
05:11 lalatenduM joined #gluster
05:12 ninkotech joined #gluster
05:19 hchiramm__ joined #gluster
05:19 sgowda joined #gluster
05:20 satheesh joined #gluster
05:27 psharma joined #gluster
05:27 sjoeboo joined #gluster
05:34 yinyin joined #gluster
05:38 _pol joined #gluster
05:43 psharma joined #gluster
05:43 raghug joined #gluster
05:48 hagarth joined #gluster
06:00 mooperd_ joined #gluster
06:02 vshankar_ joined #gluster
06:02 CheRi_ joined #gluster
06:03 bala1 joined #gluster
06:03 lkoranda_ joined #gluster
06:03 psharma joined #gluster
06:07 eryc joined #gluster
06:07 eryc joined #gluster
06:11 guigui1 joined #gluster
06:15 rgustafs joined #gluster
06:15 raghug joined #gluster
06:17 deepakcs joined #gluster
06:26 CheRi_ joined #gluster
06:27 psharma joined #gluster
06:28 Recruiter joined #gluster
06:29 eryc joined #gluster
06:29 eryc joined #gluster
06:30 lalatenduM joined #gluster
06:39 vpshastry1 joined #gluster
06:39 satheesh joined #gluster
06:40 ricky-ticky joined #gluster
06:43 puebele1 joined #gluster
06:44 SynchroM joined #gluster
06:44 ekuric joined #gluster
06:53 StarBeast joined #gluster
06:56 vpshastry1 joined #gluster
07:01 ctria joined #gluster
07:03 ngoswami joined #gluster
07:05 raghug joined #gluster
07:12 bulde joined #gluster
07:17 CheRi_ joined #gluster
07:21 bharata_ joined #gluster
07:24 dobber joined #gluster
07:56 satheesh joined #gluster
07:58 pkoro joined #gluster
08:10 piotrektt joined #gluster
08:10 piotrektt joined #gluster
08:13 chirino joined #gluster
08:24 recidive joined #gluster
08:27 dpkshetty joined #gluster
08:28 cicero_ joined #gluster
08:28 xavih_ joined #gluster
08:28 abyss^_ joined #gluster
08:28 shishir joined #gluster
08:29 sac_ joined #gluster
08:29 MinhP_ joined #gluster
08:29 JonnyNomad_ joined #gluster
08:30 semiosis joined #gluster
08:30 _risibusy joined #gluster
08:35 hateya_ joined #gluster
08:39 bulde joined #gluster
08:47 hybrid5122 joined #gluster
08:49 hateya__ joined #gluster
09:00 vrturbo joined #gluster
09:01 rastar joined #gluster
09:04 mmalesa joined #gluster
09:05 mmalesa_ joined #gluster
09:15 harish joined #gluster
09:25 bala joined #gluster
09:31 Oneiroi joined #gluster
09:34 shishir joined #gluster
09:36 _br_ joined #gluster
09:38 ngoswami joined #gluster
09:39 rgustafs joined #gluster
09:55 manik joined #gluster
10:05 spider_fingers joined #gluster
10:09 gasha joined #gluster
10:10 yinyin joined #gluster
10:12 cao joined #gluster
10:14 ctria joined #gluster
10:18 bulde joined #gluster
10:18 lalatenduM joined #gluster
10:20 _BuBU joined #gluster
10:20 _BuBU Any advice on upgrading from 3.3.2 to 3.4 ?
10:20 bala joined #gluster
10:20 _BuBU I'm using semiosis 3.3 ubuntu repos
10:22 kkeithley_ http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
10:22 glusterbot <http://goo.gl/SXX7P> (at vbellur.wordpress.com)
10:25 _BuBU ok thx
10:28 bulde1 joined #gluster
10:35 ujjain joined #gluster
10:42 DWSR joined #gluster
10:42 DWSR joined #gluster
10:43 pkoro joined #gluster
10:57 SynchroM joined #gluster
11:00 lkoranda joined #gluster
11:10 samppah @splitbrain
11:10 glusterbot samppah: To heal split-brain in 3.3+, see http://goo.gl/FPFUX .
11:13 hagarth joined #gluster
11:22 ghftggfhhjj joined #gluster
11:24 CheRi_ joined #gluster
11:36 FrankGruellich joined #gluster
11:38 FrankGruellich Hi.  Does GlusterFS support concurrent writes to the same file from 2 clients to a replica 2 volume?
11:45 DWSR joined #gluster
11:46 DWSR joined #gluster
11:47 ctria joined #gluster
11:49 saurabh joined #gluster
11:51 partner is it just me or is download.gluster.org not responding?
11:52 FrankGruellich I get all kinds of weird errors like: "[2013-07-24 10:51:04.025977] E [dht-helper.c:1052:dht_inode_ctx_get] (-->/usr/lib64/glusterfs/3.4.0/xlator/cluster/distribute.so(dht_lookup_linkfile_create_cbk+0x75) [0x7fa3076f4025] (-->/usr/lib64/glusterfs/3.4.0/xlator/cluster/distribute.so(dht_layout_preset+0x5e) [0x7fa3076db01e] (-->/usr/lib64/glusterfs/3.4.0/xlator/cluster/distribute.so(dht_inode_ctx_layout_set+0x34) [0x7fa3076dc364]))) 0-tst_ngmfs_aws01-dh
11:52 FrankGruellich What does Gluster want to tell me?
11:53 FrankGruellich partner: download.gluster.org works for me.
11:54 markus_gl joined #gluster
11:54 partner hmm i'm trying .com for some reason, i think some page has wrong link..
11:54 partner no..
11:55 partner yes :)
11:55 partner ok, cool, thanks, noticed my issue here with the .com, originates from vijay's upgrade page
11:55 markus_gl hi@all
12:00 lalatenduM joined #gluster
12:00 ngoswami joined #gluster
12:00 markus_gl I have some bigger problems with glusterfs mounted via nfs. Is somebody willing to help me? I would appreciate it ...
12:10 willytell joined #gluster
12:10 hagarth joined #gluster
12:12 samppah hagarth: ping, about bug 953887
12:12 glusterbot Bug http://goo.gl/tw8oW high, high, ---, pkarampu, MODIFIED , [RHEV-RHS]: VM moved to paused status due to unknown storage error while self heal and rebalance was in progress
12:12 mmalesa joined #gluster
12:12 mmalesa joined #gluster
12:14 SynchroM joined #gluster
12:14 kkeithley_ @ports
12:14 glusterbot kkeithley_: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
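A firewall sketch matching the ports glusterbot lists above, assuming plain iptables (the brick range is illustrative; it has to be wide enough to cover every brick ever created on the host):
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (24008 only needed for rdma)
    iptables -A INPUT -p tcp --dport 24009:24049 -j ACCEPT   # brick processes (glusterfsd), 24009 and up
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT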
12:15 Humble joined #gluster
12:15 markus_gl @nfs
12:15 glusterbot markus_gl: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
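For reference, the options glusterbot mentions translate to a mount invocation roughly like this (server and volume names are placeholders):
    mount -t nfs -o tcp,vers=3 server1:/myvolume /mnt/myvolume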
12:21 bala joined #gluster
12:23 CheRi_ joined #gluster
12:24 pkoro joined #gluster
12:31 edward1 joined #gluster
12:35 markus_gl cp of a large file to an nfs-mounted glusterfs leads to a stall at some point, where the transfer does not proceed ...
12:37 markus_gl downloading some large file like a fedora image to that location works fine
12:38 willytell left #gluster
12:42 Humble joined #gluster
12:51 aliguori joined #gluster
12:58 manik joined #gluster
13:05 lalatenduM joined #gluster
13:08 bulde joined #gluster
13:10 Humble joined #gluster
13:11 kedmison joined #gluster
13:12 lubko joined #gluster
13:12 lubko hello world
13:13 TuxedoMan joined #gluster
13:13 lubko dht translator is the one that's responsible for distributing mkdirs across all subvolumes, right?
13:14 lubko if that's too expensive for me (deep hierarchy with a couple of files in a directory), would it be a good idea to cut the path in two and hash and replicate only the left part, while distributing the right part to the same subvolumes?
13:15 vpshastry joined #gluster
13:15 * lubko is open to any other ideas about reducing mkdir overhead in distributed scenario
13:19 hybrid5122 joined #gluster
13:22 asias joined #gluster
13:23 hybrid5123 joined #gluster
13:26 hybrid5122 joined #gluster
13:29 hybrid5123 joined #gluster
13:30 hybrid5122 joined #gluster
13:41 vpshastry joined #gluster
13:41 Humble joined #gluster
13:43 dobber joined #gluster
13:57 kaptk2 joined #gluster
13:57 bugs_ joined #gluster
13:58 shylesh joined #gluster
14:00 pkoro joined #gluster
14:04 failshell joined #gluster
14:04 dewey joined #gluster
14:05 mmalesa_ joined #gluster
14:15 VIP-ire joined #gluster
14:16 rwheeler joined #gluster
14:17 Humble joined #gluster
14:27 aliguori joined #gluster
14:28 baoboa joined #gluster
14:37 morse joined #gluster
14:38 yinyin joined #gluster
14:39 puebele joined #gluster
14:42 Humble joined #gluster
14:56 _pol_ joined #gluster
15:01 bstr Hey guys, i have a replica gluster volume normally running on 2 nodes, currently only one (hardware issues with the second node). & a user accidentally removed data from the underlying brick in the setup, as opposed to removing data from the fuse mount
15:01 bstr i tried to self heal, but it was 'unsuccessful' and does not appear to be online anymore when looking at volume status
15:01 bstr any recommendations?
15:05 vshankar joined #gluster
15:05 sprachgenerator joined #gluster
15:05 VIP-ire if you removed data on the only alive brick, I'm afraid you're out of luck
15:07 bstr VIP-ire : i can bring the other brick online temporarily
15:07 bstr we're indifferent about recovering the data, we would just like to get gluster back up and running in normal shape
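For the record, once the second brick is back online, the usual way to trigger and watch a full self-heal looks roughly like this (3.3+; the volume name is a placeholder):
    gluster volume status myvolume       # confirm both brick processes show as online
    gluster volume heal myvolume full    # crawl the whole volume, not just the entries marked dirty
    gluster volume heal myvolume info    # list entries still pending heal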
15:08 daMaestro joined #gluster
15:10 kshlm joined #gluster
15:11 manik joined #gluster
15:11 _pol joined #gluster
15:11 Humble joined #gluster
15:14 kshlm Does anyone know how to request for Fedora package rebuilds? The glusterfs packages in the updates-testing repo for F19 are faulty.
15:14 DWSR joined #gluster
15:14 DWSR joined #gluster
15:15 plarsen joined #gluster
15:17 semiosis kshlm: you can always file a bug
15:17 glusterbot http://goo.gl/UUuCq
15:17 semiosis kshlm: or you can describe the problem here... the right people may be listening
15:19 jskinner_ joined #gluster
15:20 jskinner_ Are there any disadvantages to setting performance.io-thread-count to a higher value than 16?
15:20 lubko kshlm: add a negative karma with comment in bodhi (admin.fedoraproject.org/updates)
15:27 bala joined #gluster
15:31 VIP-ire I've just restarted one node of a replicated cluster (only two nodes), running Gluster 3.4
15:31 VIP-ire self heal seems to be re-syncing the files
15:31 VIP-ire but the output of volume heal info gives me gfid instead of file path
15:32 VIP-ire [root@dd8 ~]# gluster volume heal vmstore info
15:32 VIP-ire Gathering Heal info on volume vmstore has been successful
15:32 VIP-ire Brick dd8:/mnt/bricks/vmstore
15:32 VIP-ire Number of entries: 1
15:32 VIP-ire <gfid:a775be75-bc90-4ced-a4cd-ad05647ad5ad>
15:32 VIP-ire Brick dd9:/mnt/bricks/vmstore
15:32 VIP-ire Number of entries: 0
15:32 VIP-ire [root@dd8 ~]#
15:32 VIP-ire anyone knows why ?
15:33 JonnyNomad joined #gluster
15:34 VIP-ire this is on the node which has just been restarted
15:34 VIP-ire if I run the same on the other node, the output is normal (file path)
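One way to map a gfid from heal info back to a path is through the brick's .glusterfs directory, where every regular file is hard-linked under the first two bytes of its gfid (directory gfids are symlinks there instead, readable with readlink); a sketch using the gfid from the paste above:
    find /mnt/bricks/vmstore -samefile /mnt/bricks/vmstore/.glusterfs/a7/75/a775be75-bc90-4ced-a4cd-ad05647ad5ad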
15:38 kshlm The glusterd.service file used in the f19 packages depends on glusterfsd.service, which is actually useless and doesn't work. This prevents glusterd.service from starting. This came up a little while back on the gluster-devel mailing list.
15:39 _pol joined #gluster
15:39 kshlm lubko: Will do that.
15:40 Humble joined #gluster
15:41 lubko kshlm: thank you!
15:43 vpshastry joined #gluster
15:43 markus_gl joined #gluster
15:44 DWSR joined #gluster
15:44 DWSR joined #gluster
15:45 _pol_ joined #gluster
15:48 rjoseph joined #gluster
15:50 Shahar joined #gluster
16:04 JonnyNomad joined #gluster
16:10 Guest91027 joined #gluster
16:13 [1]corni joined #gluster
16:15 [1]corni Hello, I'm looking into using gluster for the following usecase and I'd like to know whether gluster is suited for my use-case: We have about 10 servers, which need to access about 1MB of shared data on a filesystem. For the serversoftware, the data is read-only, and it is only updated each other week by hand. Currently, the employee updating the data executes a script which rsyncs the file to all servers, which is obviously annoying an
16:15 [1]corni script
16:16 [1]corni Our requirement is to replace that
16:16 [1]corni We still want to keep the possibility to just shutdown a server without having to think about consistency/data loss/etc
16:16 [1]corni that's why we can't use a NFS, a SPOF is not tolerable
16:17 [1]corni And from a sysadmin perspective, I'd prefer not to use a huuuuge system for 1MB of shared data
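Purely as an illustration of what a gluster-based replacement could look like for such a small, read-mostly dataset (hostnames and paths are hypothetical, and whether a full replica across several servers is worth it for 1MB is exactly the open question here):
    # on server1, after installing glusterfs-server on each storage node
    gluster peer probe server2
    gluster peer probe server3
    gluster volume create shareddata replica 3 server1:/export/shareddata server2:/export/shareddata server3:/export/shareddata
    gluster volume start shareddata
    # on each of the ~10 application servers
    mount -t glusterfs server1:/shareddata /srv/shareddata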
16:17 sunilva84 joined #gluster
16:23 manik joined #gluster
16:27 vshankar joined #gluster
16:30 vpshastry left #gluster
16:31 zaitcev joined #gluster
16:34 _pol joined #gluster
16:37 Recruiter joined #gluster
16:38 mmalesa joined #gluster
16:38 Humble joined #gluster
16:40 markus_gl @corni I would like to have a setup like the one you described
16:40 markus_gl with the same concerns
16:40 mmalesa joined #gluster
16:40 [1]corni I'm just unsure whether glusterfs is appropriate for the job, or whether it is complete overkill and there's an easier tool for the job available
16:40 markus_gl i need to set 32-bit inodes for firefox and thunderbird 32 bit
16:41 markus_gl do you know lsyncd?
16:41 [1]corni no
16:41 markus_gl a combo of inotify and rsync
16:42 [1]corni that sounds a bit hacky-ish
16:42 markus_gl at the beginning we want to place all homes from a big login-server onto a glusterfs
16:43 markus_gl but i don't think glusterfs is able to do that fast enough
16:43 hagarth joined #gluster
16:44 markus_gl maybe with your 1MB you could be happy with glusterfs
16:45 jskinner_ Does anyone have recommendations for benchmarking volume response latency from a gluster client?
16:47 daMaestro joined #gluster
16:50 Recruiter joined #gluster
16:51 krink joined #gluster
16:54 plarsen joined #gluster
16:58 wushudoin joined #gluster
17:01 jclift_ joined #gluster
17:01 bulde joined #gluster
17:04 Recruiter2 joined #gluster
17:06 recidive joined #gluster
17:07 _pol joined #gluster
17:09 sunilva84 joined #gluster
17:10 Recruiter joined #gluster
17:17 lpabon joined #gluster
17:17 satheesh joined #gluster
17:22 Recruiter joined #gluster
17:25 Technicool joined #gluster
17:27 glusterbot New news from resolvedglusterbugs: [Bug 852406] For non-replicate type volumes, do not print brick details for "gluster volume heal info ". <http://goo.gl/eYifc> || [Bug 947824] process is getting crashed during dict_unserialize <http://goo.gl/bTvo4> || [Bug 959969] glustershd.log file populated with lot of "disconnect" messages <http://goo.gl/N4cBDR> || [Bug 968301] improvement in log message for self-heal failure on file/dir in fuse mount lo
17:32 _pol joined #gluster
17:34 hagarth joined #gluster
17:36 samppah @bug 953887
17:36 glusterbot samppah: Bug http://goo.gl/tw8oW high, high, ---, pkarampu, CLOSED CURRENTRELEASE, [RHEV-RHS]: VM moved to paused status due to unknown storage error while self heal and rebalance was in progress
17:37 samppah closed?
17:37 hagarth well, that doesn't seem right.
17:38 lalatenduM joined #gluster
17:43 crashmag joined #gluster
17:47 Technicool joined #gluster
17:50 mmalesa joined #gluster
17:50 hagarth samppah: will check with Pranith on that one.
17:54 samppah hagarth: thanks!
17:55 hagarth @channelstats
17:55 glusterbot hagarth: On #gluster there have been 160530 messages, containing 6788063 characters, 1137745 words, 4616 smileys, and 603 frowns; 1023 of those messages were ACTIONs. There have been 60653 joins, 1893 parts, 58769 quits, 20 kicks, 158 mode changes, and 6 topic changes. There are currently 202 users and the channel has peaked at 217 users.
17:56 Technicool joined #gluster
18:08 glusterbot joined #gluster
18:19 bstr joined #gluster
18:21 Humble joined #gluster
18:22 mmalesa joined #gluster
18:29 Recruiter joined #gluster
18:34 mmalesa joined #gluster
18:36 glusterbot joined #gluster
18:46 Humble joined #gluster
18:49 scyld joined #gluster
18:49 scyld hello.
18:49 glusterbot scyld: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:53 scyld I'm absolutely new to GlusterFS and managed to set up 2 peers (with the quick start guide); now I'm trying to mount the gluster volume from a client and here are the errors I see: http://pastebin.com/LCQzb5En
18:53 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
18:53 scyld please help
18:53 rwheeler joined #gluster
19:00 eightyeight joined #gluster
19:05 mooperd joined #gluster
19:06 al joined #gluster
19:08 Humble joined #gluster
19:20 ProT-0-TypE joined #gluster
19:29 recidive joined #gluster
19:38 harish joined #gluster
19:40 soukihei joined #gluster
19:44 Technicool joined #gluster
19:46 bstr joined #gluster
19:50 lalatenduM joined #gluster
19:58 chirino joined #gluster
20:03 lautriv joined #gluster
20:13 _pol joined #gluster
20:23 lautriv left #gluster
20:23 tg2 http://lists.nongnu.org/archive/html​/gluster-devel/2012-12/msg00031.html
20:23 glusterbot <http://goo.gl/59SD1> (at lists.nongnu.org)
20:23 tg2 check that
20:24 tg2 I'm sure there is a more secure option than disabling that
20:25 semiosis tg2: option for what?
20:27 MugginsM joined #gluster
20:27 ProT-0-TypE joined #gluster
20:27 semiosis @later tell scyld your client is version 3.4.0 and your server is version 3.2.5 -- that's not going to work. recommended to have all clients & servers use same version of glusterfs.
20:27 glusterbot semiosis: The operation succeeded.
20:28 tg2 response was to @scyld
20:29 tg2 didn't see the server/client version mismatch
20:29 tg2 i thought you could use a newer client than the server
20:29 tg2 but not the other way around
20:30 semiosis usually upgrade servers before clients, but in general always best to have same version all around
20:30 semiosis the 3.2 to 3.3 upgrade required downtime iirc, however with 3.3 to 3.4 clients *should* be able to work with upgraded servers.  still risky though
20:30 semiosis ,,(3.4)
20:30 glusterbot 3.4 sources and packages are available at http://goo.gl/zO0Fa Also see @3.4 release notes and @3.4 upgrade notes
20:31 semiosis ,,(3.4 upgrade notes)
20:31 glusterbot http://goo.gl/SXX7P
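A quick sanity check before and after any upgrade is simply comparing versions everywhere, since mixed versions (as in the scyld case above) tend to fail in confusing ways:
    gluster --version     # on the servers
    glusterfs --version   # on the clients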
20:36 Shdwdrgn joined #gluster
20:41 _pol joined #gluster
21:10 soukihei joined #gluster
21:18 _pol joined #gluster
21:30 _pol joined #gluster
21:34 tg2 joined #gluster
21:35 tg2 joined #gluster
21:36 Shdwdrgn joined #gluster
21:40 StarBeas_ joined #gluster
22:09 ipalaus joined #gluster
22:09 ipalaus hi
22:09 glusterbot ipalaus: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:12 mick271 joined #gluster
22:12 mick271 Hello folks
22:12 mick271 having a little issue and google is not helping me here
22:13 mick271 I am trying to create a new volume with a replica 2, but for some reason gluster is telling me my host1, the one I am running the command on, is not in 'Peer in cluster' state
22:14 ipalaus my 2 servers that were sharing a directory with GlusterFS just got rebooted and there is no way I can get one of them working with gluster. I'm trying just to mount on the main server but it isn't working. When I go to the gluster console and I do a volume info all, there is no response at all… I don't know where to start to fix this.
22:14 mick271 I understand I am not supposed to run gluster peer probe on the host1 itself
22:15 ipalaus http://paste.laravel.com/Dqb <- these are some of the logs for the specific volume
22:15 glusterbot Title: Laravel Paste Bucket (at paste.laravel.com)
22:21 mick271 anyone alive ? :)
22:23 markus_gl @ipalaus is your glusterd service alive?
22:25 markus_gl @mick271 did you "peer probe $peerserver" on both? what does "peer status" say?
22:27 mick271 markus_gl:  yes I did, the status gives the other server on each side
22:27 markus_gl peer detach and try again
22:27 markus_gl had these issues sometimes but rare
22:27 mick271 I did. but let me retry it
22:29 ipalaus markus_gl, if I do a /etc/init.d/glusterfs-server status it says "Starting gluster network storage…" even if I stop, start… it says the same. So I'm not sure
22:29 mick271 markus_gl: still the same issue, dns names are ok right ?
22:29 markus_gl :-) it's time to try ips
22:30 mick271 ^^
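The detach/re-probe sequence suggested above, spelled out (host2 is a placeholder; run the probe from a node that is already in the cluster, and check that the name resolves identically on both sides):
    getent hosts host2      # confirm DNS/hosts resolution
    gluster peer detach host2
    gluster peer probe host2
    gluster peer status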
22:31 markus_gl @ipalaus: if glusterfsd is not coming up, you cannot use the cli
22:32 markus_gl had this problem with a total network failure
22:32 ipalaus yep, that's what happened here I think :S
22:32 markus_gl because glusterd does not find one subvolume it does not start
22:33 ipalaus I just seen this in the status: http://paste.laravel.com/Dqu
22:33 glusterbot Title: Laravel Paste Bucket (at paste.laravel.com)
22:33 markus_gl grep for volume in gluster...log
22:35 markus_gl etc-glusterfs-glusterd.vol.log not present?
22:35 ipalaus I don't have any gluster: cli, etc-glusterfs-glusterd, etc-glusterfs-glusterfsd, etc-glusterfs-glusterfs, nfs, status and var-www-storage (my volume)
22:35 ipalaus empty :\
22:35 mick271 markus_gl:  somehow my dns records are working strangely
22:35 markus_gl please describe your setup
22:36 markus_gl rule: static ips for servers, do not use networkmanager
22:37 markus_gl yesterday I was testing network-failures :-)
22:37 MugginsM yeah, I had networkmanager switch IPs on me during setup. Ended up with a node peering itself. That was not fun to diagnose.
22:38 ipalaus markus_gl basically the sysadmin is on vacation and can't be reached so I'm trying to figure everything out. We have 2 backend servers that share a directory, /var/www/storage, which is replicated to the second one. From what I can see in the fstab, this is the setup: backend-01.ovh.servers.corp.domain.tld:/storage-mirror /var/www/storage glusterfs defaults 0 0
22:38 ipalaus but obviously seems that before that, gluster service is not running at all
22:38 ipalaus also the cli.log says [2013-07-25 00:15:47.511834] E [socket.c:1685:socket_connect_finish] 0-glusterfs: connection to  failed (Connection refused) so it's obvious it's not running
22:39 markus_gl yes right
22:39 ipalaus but it's not giving an error on starting/restaring/stopping the service
22:39 ipalaus :\
22:39 markus_gl gluster-server must have one subvolume reachable
22:40 markus_gl using static ip?
22:40 ipalaus I don't really know how subvolume works, I know we have one that is storage-mirror.
22:40 JM5 joined #gluster
22:40 ipalaus yes
22:40 ipalaus but how can I confirm it? it's a dedicated server so it has a static ip
22:41 markus_gl a replica 2 consists of 2 bricks, which makes 2 subvolumes connected with a replicate translator
22:41 markus_gl thats like raid1
22:41 markus_gl with 2 disks
22:42 mick271 markus_gl:  seems like my dns caused the issue, weird
22:42 markus_gl what about that other peer?
22:42 markus_gl I see 2 guys with equal problems
22:43 ipalaus this is the second log: http://paste.laravel.com/DqF it can't connect to the backend-01
22:43 glusterbot Title: Laravel Paste Bucket (at paste.laravel.com)
22:45 markus_gl what environment is that? corporate network? is a firewall active?
22:45 ipalaus there is a shorewall firewall on each server, I tried to stop it and see if that way worked but it didn't
22:45 ipalaus actually i will re-test
22:46 markus_gl if it is a replica 2 you only need one server reachable, as far as i know
22:47 ipalaus http://paste.laravel.com/DqG <- but something weird happens with backend-01, see the difference when restarting the services
22:47 glusterbot Title: Laravel Paste Bucket (at paste.laravel.com)
22:47 markus_gl both servers with dead glusterd?
22:47 ipalaus not, only first but the second is just a replica that (I think) tries to connect to the first one
22:49 markus_gl try on backend-02: gluster peer status
22:49 ipalaus peer status: No peers present
22:50 JM5 Hello, what would the expected peak read/write performance be for a 4 node cluster with single 10Gig interfaces from each node on a distributed volume? Each node: 64GB RAM, 12xHDDs RAID 6 on LSI RAID Card, 2x 6 core intel.
22:50 markus_gl gluster volume info
22:50 markus_gl gluster volume status
22:50 ipalaus No volumes present, with each command
22:50 markus_gl bad
22:51 markus_gl so backend-02 has no config
22:51 markus_gl did you find a vol-file in /etc/glusterfs ?
22:52 ipalaus yep, I've a glusterd.vol
22:52 markus_gl please paste
22:53 ipalaus http://paste.laravel.com/DqM
22:53 glusterbot Title: Laravel Paste Bucket (at paste.laravel.com)
22:53 markus_gl ok nothing interesting
22:54 markus_gl to backend-01
22:54 markus_gl we have to bring such daemon up and runing
22:54 ipalaus ok, what can we try?
22:56 markus_gl backend-01 debian?
22:56 ipalaus yep, both
22:57 markus_gl DqF was that log from 01?
22:58 markus_gl ok found something
22:58 ipalaus not, it was from the 02
22:58 markus_gl please post all logs from 01
22:59 ipalaus ok
22:59 markus_gl your volumes name is storage-mirror
22:59 ipalaus that's correct
22:59 markus_gl but that should be listed with gluster volume info/status
23:00 markus_gl that's all strange
23:02 ipalaus http://paste.laravel.com/DqP this is all the logs I have
23:02 glusterbot Title: Laravel Paste Bucket (at paste.laravel.com)
23:03 markus_gl your vol-file is missing!
23:04 markus_gl did you have a backup?
23:04 ipalaus nope :\
23:04 markus_gl crazy setup
23:05 markus_gl 3.4 beta and 3.2.7?
23:05 ipalaus [root@backend-01 /var/log/glusterfs ]# gluster --version
23:05 ipalaus glusterfs 3.2.7
23:05 markus_gl i see
23:05 ipalaus [root@backend-02 ~ ]# gluster --version
23:05 ipalaus glusterfs 3.4.0beta3
23:05 ipalaus shit
23:05 markus_gl crap
23:06 markus_gl strange: etc/glusterfs/glusterfsd.vol: No such file or directory
23:06 markus_gl different file names: /etc/glusterfs/glusterfs.vol: No such file or directory
23:07 wushudoin left #gluster
23:07 ipalaus I only have a glusterd.vol there
23:07 markus_gl in /etc/glusterfs?
23:07 ipalaus yes
23:08 ipalaus wait i've found something
23:08 ipalaus http://paste.laravel.com/Dr1 my vols are here
23:08 glusterbot Title: Laravel Paste Bucket (at paste.laravel.com)
23:09 markus_gl it looks like you have different versions of glusterfs installed
23:10 ipalaus in the same machine?
23:10 markus_gl i have a debian with 3.4 and there is no /etc/glusterd
23:11 markus_gl you have on 01: /etc/glusterd and /etc/glusterfs?
23:11 ipalaus yes
23:11 markus_gl one moment have to check backup :-)
23:12 ipalaus thank you so much
23:13 markus_gl on every system i found only /etc/glusterfs
23:14 ipalaus there is a way to check the versions I have installed?
23:15 markus_gl locate glusterd
23:15 markus_gl i found glusterd at /var/lib
23:16 ipalaus http://paste.laravel.com/Dr7
23:16 glusterbot Title: Laravel Paste Bucket (at paste.laravel.com)
23:16 ipalaus probably i could update to 3.4?
23:16 markus_gl no please
23:17 ipalaus ok, haha
23:17 markus_gl no strange things on strange things :)
23:18 markus_gl looks like your bricks are called ... home-www-storage
23:18 markus_gl there should be your data
23:18 ipalaus the files are actually at /home/www/storage
23:18 ipalaus yes that's true
23:19 markus_gl oh cool
23:19 markus_gl cat info
23:19 markus_gl in /etc/glusterd/vols/storage-mirror/
23:20 ipalaus http://paste.laravel.com/Dr8
23:20 glusterbot Title: Laravel Paste Bucket (at paste.laravel.com)
23:21 markus_gl compared to my own replica volume, the word replica seems to be missing here
23:22 ipalaus but the strangest thing is my service not booting no? :\
23:23 markus_gl because it did not find a volfile
23:23 ipalaus ah okay!
23:24 markus_gl please invoke: ps aux | grep [g]glusterd
23:24 markus_gl and ps aux | grep [g]glusterfsd
23:25 ipalaus nothing :\
23:25 markus_gl standby
23:28 markus_gl please paste ll /var/log/glusterfs
23:28 ipalaus from the second server?
23:29 markus_gl 01
23:29 markus_gl we need to bring that service up
23:30 ipalaus I pasted everything in the directory here: http://paste.laravel.com/DqP
23:30 glusterbot Title: Laravel Paste Bucket (at paste.laravel.com)
23:30 ipalaus all the files I have there
23:33 markus_gl could you please cd to /etc/glusterfs/
23:33 ipalaus yes
23:33 markus_gl there is what file present?
23:34 markus_gl which file?
23:34 fidevo joined #gluster
23:34 markus_gl glusterd.vol?
23:34 ipalaus glusterd.vol
23:34 ipalaus yes
23:35 markus_gl please rename to glusterfsd.vol
23:35 markus_gl and try to start the service
23:37 JM5 left #gluster
23:37 ipalaus mmm
23:38 markus_gl ?
23:38 ipalaus nothing, there is no ok and I can't see it in the ps aux
23:38 markus_gl ok rename back
23:39 ipalaus done
23:39 markus_gl service glusterfs-server start
23:40 ipalaus #  service glusterfs-server start
23:40 ipalaus Starting gluster network storage...
23:40 ipalaus #
23:40 ipalaus [root@backend-01 /etc/glusterfs ]# ps aux | grep [g]glusterd
23:40 ipalaus [root@backend-01 /etc/glusterfs ]# ps aux | grep [g]gluster
23:40 ipalaus [root@backend-01 /etc/glusterfs ]#
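As an aside, the patterns above ([g]glusterd, [g]gluster) literally search for a doubled g (gglusterd, ggluster), so those greps can never match even when a daemon is up; the usual self-excluding pattern, plus checking the log and running the daemon in the foreground, would look something like this (log file name as mentioned earlier in the channel):
    ps aux | grep '[g]luster'
    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    glusterd --debug    # run in the foreground with debug logging to see why it exits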
23:41 markus_gl i do not understand why you have two vol.log-files
23:41 markus_gl do you like your sysadmin?
23:41 ipalaus mmm… what do you think about recreating the volume? it could be an option?
23:41 ipalaus it wasn't his fault, he found this already installed too
23:41 markus_gl ok
23:41 ipalaus it's a legacy thing :\
23:42 markus_gl it looks like
23:42 markus_gl i see we won't get it working over irc
23:43 markus_gl we know: 2 bricks
23:43 markus_gl one volume: storage-mirror
23:43 markus_gl 2 peers
23:44 markus_gl one not connected because its service is offline, not finding a volfile even though the volfile is there
23:45 markus_gl but we do not know anything about that translator
23:46 markus_gl your data is still there right?
23:46 markus_gl at least in the bricks on both servers?
23:48 ipalaus yes, thanks, it's still there :)
23:48 markus_gl home-www-storage
23:48 markus_gl or var-www-storage
23:48 StarBeast joined #gluster
23:48 ipalaus home-www-storage is where the files are
23:48 ipalaus var-www-storage where they are mounted
23:49 markus_gl ok good
23:49 markus_gl which is the more trusted, consistent brick?
23:49 markus_gl which one?
23:49 ipalaus mmm brick is where the data actually is?
23:50 markus_gl home-www-storage
23:50 ipalaus yes, home has the files, the var-www-storage is empty
23:50 markus_gl on 02 there should be .glusterfs
23:51 markus_gl and mounted on 02 at /var ...?
23:51 markus_gl fstab?
23:52 ipalaus there is an empty directory so it's not successfully mounted, but yeah, on both they are in the fstab
23:52 markus_gl on 02 it should be mountable
23:52 ipalaus # mount /var/www/storage
23:52 ipalaus Mount failed. Please check the log file for more details.
23:52 ipalaus and then the log says it can't connect
23:52 markus_gl noooo
23:53 ipalaus i did it wrong? ups :\
23:53 markus_gl gluster does not find a volume, right?
23:53 markus_gl gluster volume info
23:53 markus_gl on 01
23:54 markus_gl tell me your fstab entry
23:54 ipalaus [root@backend-01 /var/www ]# gluster volume info
23:54 ipalaus Connection failed. Please check if gluster daemon is operational.
23:54 ipalaus [root@backend-01 /var/www ]# service glusterfs-server start
23:54 ipalaus Starting gluster network storage...
23:54 ipalaus [root@backend-01 /var/www ]# gluster volume info
23:54 ipalaus Connection failed. Please check if gluster daemon is operational.
23:54 ipalaus [root@backend-01 /var/www ]#
23:54 markus_gl ok
23:54 ipalaus backend-01.ovh.servers.corp.domain.com:/storage-mirror /var/www/storage glusterfs defaults 0 0
23:54 markus_gl daemon is down too
23:54 ipalaus in 02 it's backend-02
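As an aside on the fstab line quoted above, a fuse mount of a replica 2 volume is often written with _netdev and a fallback volfile server so either backend can serve the mount (option name as in the 3.3/3.4 mount helper; hostnames abbreviated):
    backend-01:/storage-mirror /var/www/storage glusterfs defaults,_netdev,backupvolfile-server=backend-02 0 0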
23:55 markus_gl wait, we don't have a single daemon running?
23:55 ipalaus I tried to start it but it doesn't start
23:56 markus_gl i thought 01 was running
23:56 ipalaus it's not :\
23:56 markus_gl what does the log say if you try to start it at 01?
23:56 ipalaus fuck I've to go, it's already 2AM here...
23:56 Humble joined #gluster
23:56 ipalaus you will be around tomorrow?
23:57 markus_gl yes :)
23:57 markus_gl so backend 02 is running
23:57 markus_gl right?
23:57 ipalaus yep 02 is running
23:57 ipalaus but its 3.4 there :\
23:57 markus_gl i am confused sorry
23:58 markus_gl ok please invoke on 02 the following
23:58 markus_gl gluster volume info
23:58 markus_gl gluster volume status
23:58 ipalaus [root@backend-02 ~ ]# gluster volume info
23:58 ipalaus No volumes present
23:58 ipalaus [root@backend-02 ~ ]# gluster volume status
23:58 ipalaus No volumes present
23:58 ipalaus :(
23:58 markus_gl ok that thing is fucked up
23:59 markus_gl you have to backup
23:59 markus_gl and tomorrow we fix it, ok?
23:59 markus_gl -m
23:59 ipalaus perfect, I will backup tonight
23:59 ipalaus thank you so much!
23:59 markus_gl backup bricks and config please
23:59 lpabon_ joined #gluster
