
IRC log for #gluster-dev, 2016-10-15


All times shown according to UTC.

Time Nick Message
00:25 itisravi joined #gluster-dev
00:27 overclk joined #gluster-dev
01:41 glustin joined #gluster-dev
01:42 AndroUser2 joined #gluster-dev
02:39 riyas joined #gluster-dev
03:27 akanksha__ joined #gluster-dev
03:50 apandey joined #gluster-dev
03:50 mchangir|afk joined #gluster-dev
04:10 [o__o] joined #gluster-dev
04:53 sanoj_ joined #gluster-dev
05:26 pranithk1 joined #gluster-dev
05:26 akanksha__ Hello, I have an issue. I have set up a local gluster volume on my machine without VMs, and I am trying to run these tests: https://github.com/gluster/gbench/tree/master/bench-tests/bt-0000-0001
05:27 akanksha__ For these, what should I set my CLIENT and SERVER variables to, and do I need to enable ssh?
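A minimal sketch of a single-machine setup for that question, assuming the gbench profile drives CLIENT and SERVER over ssh and that pointing both at the local host is acceptable (the variable names come from the question; their exact format is defined by the bt-0000-0001 profile, so check its README):

    # Passwordless ssh to the local machine, which a one-box run would need.
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # skip if a key already exists
    ssh-copy-id root@localhost                  # authorize that key for localhost
    ssh -o BatchMode=yes root@localhost true    # must succeed without a password prompt

    # Hypothetical values; set them in whatever form the profile expects.
    CLIENT=localhost
    SERVER=localhost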
05:27 apandey joined #gluster-dev
05:31 pranithk1 apandey: completed :-)
05:31 pranithk1 apandey: will check with 3.7.x
05:32 apandey pranithk1:  hmmm
05:32 apandey pranithk1, did you change 60 sec to 600 sec?
05:33 apandey pranithk1: ^^
05:33 pranithk1 apandey: ah! that I didn't do. Good point, I will do that after the 3.7.x one
05:34 apandey pranithk1: ok. if it is 60 sec, you may not spot it..
05:36 pranithk1 apandey: Yeah, you are correct...
05:36 gem joined #gluster-dev
05:37 pranithk1 apandey: with 3.7.15 the heal stopped after healing 100/2500 entries
05:38 pranithk1 apandey: with the latest changes it completes in first run itself, so there is no match at all :-). Let me do what you suggest and report back
05:39 apandey pranithk1: :) good
05:46 apandey pranithk1: it is not happening on 3.2.0. I again tried with the gluster tar..
05:46 apandey pranithk1: keep 600 sec and you may see the diff..
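The 60 vs. 600 second value being discussed here most plausibly refers to the self-heal daemon's crawl interval. If that is the case (an assumption; they may instead be changing a hard-coded timeout in the source), it can be tuned per volume without a rebuild:

    # Assumption: the interval in question is shd's index-crawl period,
    # exposed as the cluster.heal-timeout volume option (value in seconds).
    gluster volume set volname cluster.heal-timeout 600
    gluster volume get volname cluster.heal-timeout    # confirm the value took effect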
05:47 pranithk1 apandey: meaning? heals are happening or not happening?
05:47 apandey pranithk1: Not happening at one shot
05:47 pranithk1 apandey: yes, doing that now
05:47 apandey pranithk1: It is waiting for next round of scan..
05:48 apandey pranithk1: 2056 entries are pending for next round of scan..
05:49 pranithk1 apandey: Dude, it completed again :-)
05:50 apandey pranithk1: with 10 min?
05:50 pranithk1 apandey: yeah
05:51 apandey pranithk1: ok let me tell you exact steps I am doing..
05:51 pranithk1 apandey: okay
05:54 apandey pranithk1: 1 - create a 4+2 volume. 2 - kill a brick. 3 - disable self heal. 4 - untar glusterfs. 5 - restart the volume using force; heal will not start. 6 - enable self heal using the command gluster v heal volname enable. 7 - wait for 10 min; heal will start. 8 - keep doing heal info (I have also placed logs after every heal_do). You will see that after 2-3 min from the start of heal, the number of entries in heal info stops decreasing.
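Those steps as a command sketch, assuming a single host, six bricks under /bricks, a volume named testvol mounted at /mnt/testvol, and a glusterfs source tarball on hand (all of those names and paths are placeholders):

    # 1. Create and start a 4+2 disperse volume, then mount it.
    gluster volume create testvol disperse 6 redundancy 2 $(hostname):/bricks/b{1..6} force
    gluster volume start testvol
    mount -t glusterfs $(hostname):/testvol /mnt/testvol

    # 2-3. Kill one brick process and disable the self-heal daemon.
    gluster volume status testvol        # note the PID of one brick
    kill -9 <brick-pid>
    gluster volume heal testvol disable

    # 4. Untar a source tree onto the mount so there is plenty to heal.
    tar -xf glusterfs-3.7.15.tar.gz -C /mnt/testvol

    # 5-6. Bring the killed brick back and re-enable self-heal.
    gluster volume start testvol force
    gluster volume heal testvol enable

    # 7-8. Once the next crawl kicks in, watch whether the pending count keeps dropping.
    watch -n 10 "gluster volume heal testvol info"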
05:55 spalai joined #gluster-dev
05:58 apandey pranithk1: forget about one scan. It is not completing even after the second scan :(
05:58 pranithk1 apandey: The way I am doing it is different...
05:59 apandey pranithk1: ok
06:00 sanoj_ joined #gluster-dev
06:01 apandey pranithk1: http://pastebin.test.redhat.com/421299
06:01 pranithk1 apandey: Have 3 terminals. In one terminal, create the setup and kill 2 bricks then create data
06:02 pranithk1 apandey: then in second terminal do tailf glustershd.log
06:02 apandey pranithk1: yes, I am doing this..
06:02 pranithk1 apandey: In the third terminal do watch "ls -l <brick>/.glusterfs/indices/xattrop"
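The three-terminal workflow above, spelled out with typical paths (the log location and brick path are assumptions):

    # Terminal 1: create the volume, kill 2 bricks, then create data on the mount (as above).
    # Terminal 2: follow the self-heal daemon log.
    tail -f /var/log/glusterfs/glustershd.log
    # Terminal 3: watch the pending-heal index on a surviving brick shrink.
    watch "ls -l /bricks/b1/.glusterfs/indices/xattrop"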
06:05 riyas joined #gluster-dev
06:05 pranithk1 apandey: still not healing?
06:06 apandey pranithk1: yes, even after second scan 911 entries are pending..
06:32 spalai joined #gluster-dev
07:11 gem joined #gluster-dev
07:40 msvbhat joined #gluster-dev
08:00 s-kania joined #gluster-dev
08:10 msvbhat joined #gluster-dev
08:25 suliba_ joined #gluster-dev
08:26 kkeithle joined #gluster-dev
08:27 soumya_ joined #gluster-dev
08:33 jtc` joined #gluster-dev
08:46 pranithk1 joined #gluster-dev
08:51 pkalever joined #gluster-dev
09:29 rraja joined #gluster-dev
09:33 Acinonyx joined #gluster-dev
09:45 pkalever joined #gluster-dev
10:35 mchangir|afk joined #gluster-dev
10:52 gem joined #gluster-dev
10:59 ankitraj joined #gluster-dev
11:52 spalai joined #gluster-dev
11:59 dlambrig_ joined #gluster-dev
12:30 shyam joined #gluster-dev
12:33 luizcpg joined #gluster-dev
13:38 luizcpg joined #gluster-dev
14:17 luizcpg joined #gluster-dev
14:26 spalai joined #gluster-dev
14:40 pranithk1 joined #gluster-dev
14:41 msvbhat joined #gluster-dev
15:32 lpabon joined #gluster-dev
15:35 gem joined #gluster-dev
15:50 anrao joined #gluster-dev
15:51 spalai joined #gluster-dev
15:59 anrao_ joined #gluster-dev
16:01 anrao_ I'm getting the following error while trying to "sudo make install" on the gluster src code; every other command before it runs just fine
16:01 anrao_ error is :/usr/bin/install: cannot stat ‘glustereventsd-Debian’: No such file or directory
16:01 anrao_ make[3]: *** [Debian] Error 1
16:01 anrao_ make[2]: *** [install-am] Error 2
16:01 anrao_ make[1]: *** [install-recursive] Error 1
16:01 anrao_ make: *** [install-recursive] Error 1
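That failure comes from the install rule for the events service files: it expects a distro-specific glustereventsd init script that was apparently never generated for this tree. Two things worth trying, hedged because the exact cause depends on the checkout and distro: regenerate the build files so configure's distro detection produces the file, or build without the events feature if it is not needed (confirm the flag exists with ./configure --help):

    # Option 1: regenerate and reconfigure from the top of the source tree.
    ./autogen.sh && ./configure && make && sudo make install

    # Option 2 (assumption: this checkout supports the flag; check ./configure --help):
    ./configure --disable-events && make && sudo make install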
16:02 anrao_ nigelb: can you please help ?
16:02 anrao_ kshlm: you there ? can you check the above ^^ error
16:19 anrao_ kshlm++
16:19 glusterbot anrao_: kshlm's karma is now 116
16:30 semiosis joined #gluster-dev
16:30 semiosis joined #gluster-dev
16:30 tdasilva joined #gluster-dev
17:11 spalai joined #gluster-dev
17:17 msvbhat joined #gluster-dev
17:17 gem joined #gluster-dev
17:51 spalai left #gluster-dev
19:40 dlambrig_ joined #gluster-dev
20:16 akanksha__ joined #gluster-dev
21:14 gem joined #gluster-dev
21:43 hagarth joined #gluster-dev
21:54 gem joined #gluster-dev
22:09 anrao_ joined #gluster-dev
22:14 Menaka joined #gluster-dev
22:52 gem joined #gluster-dev
22:55 Menaka Hello, please help me understand this behaviour. I am running the Smallfile distributed I/O benchmark using the following command: python /root/smallfile/smallfile_cli.py --top /home/me/Documents/data --remote-pgm-dir /root/smallfile/ --host-set host --threads 8 --file-size 64 --files 10000 --response-times Y
22:56 Menaka Though I am running it as the root user on the host machine itself, it's asking for the root password. But why?
22:58 Menaka @nigelb and @shyam kindly help. When I ran the bench script, it failed. So I tried running the command by hand, and it gave the expected result, but only after I entered the password, which is presumably why the bench script fails. But I have already configured passwordless ssh across CLIENTS and SERVERS and was able to successfully run the IOzone part of the bench script.
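With --remote-pgm-dir and --host-set, smallfile launches its per-host workers over ssh, so the exact name given in --host-set must accept key-based root login even when it refers to the local machine. A quick check, assuming the host-set entry is literally the string "host" from the command above:

    # Non-interactive login to the same name smallfile will use must succeed.
    ssh -o BatchMode=yes root@host true && echo "key auth OK" || echo "still prompting"
    # If it still prompts, authorize the local key for that exact name.
    ssh-copy-id root@host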
23:09 gem joined #gluster-dev
23:45 gem joined #gluster-dev
