
IRC log for #gluster, 2015-03-01


All times shown according to UTC.

Time Nick Message
00:06 partner joined #gluster
00:29 ninkotech joined #gluster
00:39 CyrilPeponnet joined #gluster
00:48 kminooie joined #gluster
00:54 kminooie @topic
00:54 glusterbot kminooie: Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:09 T3 joined #gluster
01:29 theron joined #gluster
01:42 nitro3v joined #gluster
01:46 nitro3v_ joined #gluster
01:47 nitro3v__ joined #gluster
02:23 nitro3v joined #gluster
02:26 sputnik13 joined #gluster
02:30 theron joined #gluster
02:31 bala joined #gluster
02:41 wkf joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:49 DV joined #gluster
03:33 T3 joined #gluster
03:34 theron joined #gluster
03:50 wkf joined #gluster
04:04 sputnik13 joined #gluster
04:18 theron joined #gluster
04:52 anoopcs joined #gluster
04:55 anoopcs joined #gluster
05:02 kminooie left #gluster
05:13 harish joined #gluster
05:44 cfeller joined #gluster
05:55 sputnik13 joined #gluster
06:23 theron joined #gluster
06:40 bala joined #gluster
07:10 bitpushr joined #gluster
07:12 maveric_amitc_ joined #gluster
07:19 sputnik13 joined #gluster
07:24 theron joined #gluster
07:36 Philambdo joined #gluster
07:42 ndarshan joined #gluster
07:45 kovshenin joined #gluster
08:11 javi404 joined #gluster
08:26 LebedevRI joined #gluster
08:32 T3 joined #gluster
08:51 maveric_amitc_ Anybody there ?
08:51 maveric_amitc_ I have a query
08:54 maveric_amitc_ semiosis, Hey
08:54 maveric_amitc_ semiosis, you there ?
08:55 badone_ joined #gluster
08:59 soumya joined #gluster
09:13 theron joined #gluster
09:21 ekuric joined #gluster
09:27 theron joined #gluster
09:34 misc joined #gluster
09:48 kovshenin joined #gluster
09:54 afics joined #gluster
10:43 SOLDIERz joined #gluster
10:44 SOLDIERz joined #gluster
10:51 theron joined #gluster
11:11 kovshenin joined #gluster
11:12 kovshenin joined #gluster
11:22 sage_ joined #gluster
11:38 sage joined #gluster
11:54 rjoseph joined #gluster
12:00 Folken_ maveric_amitc_: don't say you have a question, just ask it
12:01 telmich joined #gluster
12:03 mat1010 left #gluster
12:10 telmich in a 2 node gluster cluster (Replicate, 1 x 2 = 2) the other server failed
12:10 telmich now the volume is offline; can I somehow get it online without the other server?
12:10 telmich (using 3.6.2)
12:17 Folken_ telmich: how are you trying to moun tit?
12:17 Folken_ mount it I meant
12:19 Folken_ I'm not a guru but I'll help if I can
12:25 mat1010 joined #gluster
12:40 theron joined #gluster
12:55 theron joined #gluster
13:17 bala joined #gluster
13:35 azar joined #gluster
13:35 azar How can I get the full path of a file in the glusterfs source code, suppose that I am in the crypt xlator? Is there any way?
13:36 telmich Folken_: it is mounted at another server using -t glusterfs; when I try to access files, it says "transport endpoint not connected"
13:37 Folken_ so it does mount
13:37 Folken_ and you can see files
13:37 Folken_ but some files are not opening?
13:37 telmich Folken_: it has been mounted, but I cannot access files anymore
13:37 telmich Folken_: the problem is that even though it is replicated, it failed with one node being offline
13:38 Folken_ telmich: can you see the files on the working brick manually?
13:40 telmich Folken_: the problem I am solving is to ensure that the cluster never gets in an unusable state, not to find the files -- this gluster volume is hosting various virtual machines with the gluster mount, which all hang now, because of the mount problem
13:41 Folken_ telmich: if I was you, I'd first confirm that replication was in fact happening and that the files are on the working brick
13:42 telmich Folken_: it used to work, I have verified that before
13:43 Folken_ gluster peer status show the failed brick disconnected?
13:45 telmich Folken_: yes
13:56 theron joined #gluster
14:02 Folken_ telmich: what's the name of your mount point, what command did you use to mount it
14:02 Folken_ telmich: have you tried to unmount/remount
14:02 Folken_ telmich: I have no idea what's wrong, just making suggestions as everybody else is asleep
14:03 telmich Folken_: can't remount, as the VMs are still running
14:05 Folken_ ok, what's your mountpoint look like
14:05 Folken_ darkriot:/datapoint        22T   16T  5.8T  74% /mnt
14:07 telmich Folken_: vmhost1-cluster1.place4.ungleich.ch:/cluster1 on /var/lib/one/datastores/100 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default​_permissions,allow_other,max_read=131072)
14:07 telmich df -h does not return information anymore
14:08 telmich I guess that the problem is that there is no quorum possible at the moment
14:08 telmich however trying to gluster peer probe another host results in peer probe: failed: Probe returned with unknown errno 107
14:12 Folken_ which host is down?
14:14 telmich [15:11:41] vmhost2-cluster1:~# gluster peer probe entrance.place4.ungleich.ch
14:14 telmich peer probe: failed: Cluster quorum is not met. Changing peers is not allowed in this state
14:14 telmich nice
14:20 Folken_ so your mount point is to vmhost1-cluster1 which is down? is that correct?
14:22 telmich Folken_: vmhost1-cluster1 and vmhost2-cluster1 are in a replica setup; the volfile was loaded from vmhost1-cluster1, yes
14:24 social joined #gluster
14:24 Folken_ so if vmhost1-cluster1 is offline your mountpoint won't work, you'll need to dismount and remount using vmhost2-cluster1
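A minimal sketch of the remount Folken_ suggests, using the hostnames and mount point that appear earlier in this log (adjust for your own setup). The `backupvolfile-server` mount option lets the client fall back to another server for the volfile when the primary is unreachable; note it only matters at mount time, since a mounted client talks to all bricks directly:

```shell
# Remount the volume from the surviving server; hostnames and mount
# point are taken from the discussion above.
umount /var/lib/one/datastores/100
mount -t glusterfs \
    -o backupvolfile-server=vmhost1-cluster1.place4.ungleich.ch \
    vmhost2-cluster1.place4.ungleich.ch:/cluster1 \
    /var/lib/one/datastores/100
```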
14:26 telmich gluster volume set cluster1 cluster.server-quorum-type none; gluster peer probe entrance.place4.ungleich.ch
14:26 telmich this fixed the problem of not being able to add another peer
14:27 Folken_ that I don't know...
14:28 theron joined #gluster
14:44 telmich now the filesystem is readonly, but at least visible
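The read-only state telmich describes is consistent with client-side quorum on a 2-brick replica volume. A hedged sketch of the two quorum knobs involved, using the volume name `cluster1` from the log; relaxing quorum restores availability but increases the risk of split-brain, so treat this as a recovery workaround, not a recommended steady state:

```shell
# Server-side quorum (the setting telmich changed above): when it is
# unmet, glusterd stops the bricks and the volume goes offline.
gluster volume set cluster1 cluster.server-quorum-type none

# Client-side quorum: with "auto" on a 2-brick replica, losing one
# brick can leave the mount read-only; "none" allows single-brick writes.
gluster volume set cluster1 cluster.quorum-type none

# Verify the resulting option values.
gluster volume info cluster1
```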
14:55 azar left #gluster
15:12 pelox joined #gluster
15:44 theron joined #gluster
15:58 theron joined #gluster
15:59 crashmag joined #gluster
16:14 rotbeard joined #gluster
16:25 sprachgenerator joined #gluster
16:38 sprachgenerator joined #gluster
16:42 sputnik13 joined #gluster
16:52 chirino joined #gluster
17:01 crashmag joined #gluster
17:13 awerner joined #gluster
17:20 hagarth joined #gluster
17:29 hani left #gluster
17:41 hagarth joined #gluster
17:43 theron joined #gluster
17:55 al joined #gluster
17:57 lifeofgu_ joined #gluster
17:57 lifeofgu_ hi all
17:58 lifeofgu_ how can I check the replication status? Everything seems to work, but when I check the volume path (e.g. where everything should be stored) I find the files only on one server
17:58 lifeofgu_ theoretically it should replicate the files physically on both, correct?
18:02 awerner left #gluster
18:17 lifeofgu_ if there were at least a command to check what the replication status is
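There are in fact commands that report replication and self-heal state for a replica volume; a short sketch, with `myvol` as a placeholder volume name, run on any server in the trusted pool:

```shell
gluster volume info myvol          # replica count and brick layout
gluster volume status myvol        # which bricks and processes are online
gluster volume heal myvol info     # files still pending self-heal
```

If files only ever appear on one brick, `volume info` will usually show the volume was created as distribute rather than replica.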
18:30 chirino joined #gluster
18:31 kovshenin joined #gluster
18:35 pelox joined #gluster
18:37 Philambdo joined #gluster
18:44 theron joined #gluster
18:56 theron_ joined #gluster
19:18 nitro3v joined #gluster
19:46 lnr joined #gluster
19:58 wkf joined #gluster
20:11 brad[] joined #gluster
20:14 brad[] joined #gluster
20:17 brad[] joined #gluster
20:21 calum_ joined #gluster
20:29 DV joined #gluster
20:32 lnr left #gluster
20:45 theron joined #gluster
21:01 mikemol joined #gluster
21:15 skroz joined #gluster
21:18 purpleidea ndevos: re: http://blog.nixpanic.net/2015/03/auto​matically-subscribe-rhel-systems.html have you seen: https://ttboj.wordpress.com/2015/02/23/build​ing-rhel-vagrant-boxes-with-vagrant-builder/ HTH
21:18 T3 joined #gluster
21:19 wkf joined #gluster
21:20 yosafbridge joined #gluster
21:21 wkf joined #gluster
21:21 skroz hi.  is significantly degraded IO expected if a member host containing a brick in a replicated or dispersed volume is rebooted?
21:49 SOLDIERz joined #gluster
21:55 ndevos purpleidea: yeah, I've seen that, but I don't use vagrant :)
21:57 ndevos I actually use conserver to start a prepared vm, and use pxe boot with output over (remote) serial console to pick a distribution to install
21:58 ndevos after that, some ansible playbooks take care of most things
22:01 theron joined #gluster
22:06 badone_ joined #gluster
22:07 uebera|| joined #gluster
22:25 sputnik13 joined #gluster
22:33 calum_ joined #gluster
22:50 SOLDIERz joined #gluster
22:52 elico joined #gluster
23:18 diegows joined #gluster
23:22 mat1010 joined #gluster
23:27 mat1010 joined #gluster
23:36 RicardoSSP joined #gluster
23:36 RicardoSSP joined #gluster
23:42 Thexa4 joined #gluster
23:44 purpleidea ndevos: cool, no worries :) but _do_ try the vagrant, it speeds up things a lot because you don't have to wait for kickstart to finish
23:45 purpleidea each time
23:45 purpleidea hth
23:48 Thexa4 hi, I'm trying to migrate data to gluster from a raid1 system and want to store it on a distributed volume. Unfortunately I only have enough storage for one (replicated) copy. With my raid system I could temporarily set one of the disks as missing which enabled me to have two copies at the same time. Is there a solution like this I can use for my migration to gluster?
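One possible answer to Thexa4's question, sketched with placeholder hostnames and brick paths: gluster can change the replica count of an existing volume via `add-brick`, so the migration can start with a plain single-brick volume (one copy, like the degraded raid1 trick) and gain its second copy only after the old raid disks are freed:

```shell
# Phase 1: single-brick volume holding the only copy of the data.
gluster volume create migvol server1:/bricks/b1
gluster volume start migvol
# ... mount migvol and copy the data in ...

# Phase 2: once an old raid disk is wiped, add it as a replica brick.
gluster volume add-brick migvol replica 2 server2:/bricks/b1
gluster volume heal migvol full    # populate the new brick
```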
23:50 theron joined #gluster
23:51 SOLDIERz joined #gluster
23:52 mator joined #gluster
