
IRC log for #gluster-dev, 2016-03-02


All times shown according to UTC.

Time Nick Message
00:26 hagarth joined #gluster-dev
01:08 itisravi joined #gluster-dev
01:09 itisravi can some one with commit access merge http://review.gluster.org/#/c/13535/ ?
01:09 itisravi okthxbye
01:09 EinstCrazy joined #gluster-dev
01:29 xavih joined #gluster-dev
01:55 baojg joined #gluster-dev
02:07 baojg joined #gluster-dev
02:15 baojg joined #gluster-dev
02:48 ilbot3 joined #gluster-dev
02:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
03:29 hchiramm joined #gluster-dev
03:38 baojg joined #gluster-dev
03:42 nishanth joined #gluster-dev
03:53 shubhendu joined #gluster-dev
03:55 sakshi joined #gluster-dev
03:59 itisravi joined #gluster-dev
04:10 pranithk joined #gluster-dev
04:18 mchangir joined #gluster-dev
04:22 kanagaraj joined #gluster-dev
04:33 purnima joined #gluster-dev
04:50 gem joined #gluster-dev
04:51 gem_ joined #gluster-dev
04:51 kshlm joined #gluster-dev
05:03 ppai joined #gluster-dev
05:04 pppp joined #gluster-dev
05:06 baojg joined #gluster-dev
05:08 kasturi joined #gluster-dev
05:12 aravindavk joined #gluster-dev
05:16 ndarshan joined #gluster-dev
05:18 aspandey joined #gluster-dev
05:20 poornimag joined #gluster-dev
05:20 Manikandan joined #gluster-dev
05:24 spalai joined #gluster-dev
05:33 jiffin joined #gluster-dev
05:37 overclk joined #gluster-dev
05:40 hgowtham joined #gluster-dev
05:45 itisravi joined #gluster-dev
05:46 Apeksha joined #gluster-dev
05:51 spalai joined #gluster-dev
05:53 rraja joined #gluster-dev
05:54 kshlm joined #gluster-dev
05:54 Bhaskarakiran joined #gluster-dev
05:55 ashiq joined #gluster-dev
06:01 vmallika joined #gluster-dev
06:02 vimal joined #gluster-dev
06:12 gem joined #gluster-dev
06:17 asengupt joined #gluster-dev
06:18 nishanth joined #gluster-dev
06:18 atalur joined #gluster-dev
06:22 shubhendu joined #gluster-dev
06:27 kdhananjay joined #gluster-dev
06:31 ggarg joined #gluster-dev
06:32 sakshi joined #gluster-dev
06:32 sakshi joined #gluster-dev
06:38 karthikfff joined #gluster-dev
06:40 aspandey joined #gluster-dev
06:41 hchiramm joined #gluster-dev
06:45 spalai joined #gluster-dev
07:13 pranithk joined #gluster-dev
07:23 shubhendu joined #gluster-dev
07:24 atalur joined #gluster-dev
07:25 mchangir atalur, would you be able to help me with a volume mount-time message regarding AFR?
07:26 nishanth joined #gluster-dev
07:27 atalur mchangir, sure. could you elaborate?
07:27 mchangir atalur, E [MSGID: 108040] [afr.c:414:init] 0-earth-replicate-1: Unable to fetch afr pending changelogs. Is op-version >= 30707? [Invalid argument]
07:28 josferna joined #gluster-dev
07:29 atalur mchangir, http://www.gluster.org/pipermail/gluster-devel/2016-February/048108.html
07:30 mchangir atalur, I'll take a look
07:30 atalur mchangir, I think an update should help. this problem was fixed in 3.7.8
07:30 pranithk joined #gluster-dev
07:33 mchangir atalur, I'm with upstream master pulled last evening
07:33 atalur mchangir, what is the op-version?
07:34 post-factum probably, gluster volume set all cluster.op-version 30707
07:34 post-factum ?
07:34 atalur yes.. should do ^^
07:34 mchangir atalur, oh!
07:37 mchangir atalur, the op-version setting command succeeded ... but still getting the same message on attempting a mount
07:37 mchangir atalur, do I need to restart glusterd?
07:40 mchangir post-factum, do I need to re-create volume after setting cluster.op-version to 30707?
07:46 post-factum nope
07:46 post-factum you do not need to recreate volume
07:46 post-factum don't know about a restart, however. afaik, no as well
07:55 mchangir post-factum, re-create helped ... thanks
07:56 post-factum that is weird
08:00 purnima joined #gluster-dev
08:13 pranithk joined #gluster-dev
08:39 atalur mchangir, oops, sorry, had gone for lunch. but a glusterd restart should have helped, from my understanding
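For reference, a minimal sketch of the op-version steps discussed above, using the command post-factum suggested and the glusterd.info path that comes up later in this log (values are the ones from this conversation; adjust for your cluster):

    # Bump the cluster op-version on any one node:
    gluster volume set all cluster.op-version 30707
    # Check the value glusterd will read back at startup:
    grep operating-version /var/lib/glusterd/glusterd.info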
08:50 pranithk joined #gluster-dev
08:51 Bhaskarakiran joined #gluster-dev
08:58 baojg joined #gluster-dev
09:27 baojg joined #gluster-dev
09:27 post-factum joined #gluster-dev
09:48 nbalacha joined #gluster-dev
09:58 mchangir joined #gluster-dev
10:07 nbalacha joined #gluster-dev
10:21 baojg joined #gluster-dev
10:39 pranithk joined #gluster-dev
10:52 Bhaskarakiran_ joined #gluster-dev
11:06 mchangir joined #gluster-dev
11:17 skoduri joined #gluster-dev
11:17 Bhaskarakiran joined #gluster-dev
11:29 purnima joined #gluster-dev
11:42 ira joined #gluster-dev
11:43 Bhaskarakiran joined #gluster-dev
11:45 skoduri joined #gluster-dev
11:57 kshlm Weekly Gluster community meeting is starting soon in #gluster-meeting
11:59 josferna_ joined #gluster-dev
12:01 lpabon joined #gluster-dev
12:03 Saravanakmr joined #gluster-dev
13:04 baojg joined #gluster-dev
13:19 josferna_ joined #gluster-dev
13:31 baojg joined #gluster-dev
13:35 mchangir kshlm, I've got a glusterd query
13:35 kshlm mchangir, Sure. Go ahead.
13:35 mchangir kshlm, after hitting the "is op-version >= 30707" issue, I set cluster.op-version to 30707 and attempted to restart glusterd ... but glusterd doesn't come up ... any pointers to get it up ... I've even tried manually resetting operating-version in glusterd.info to 30603 ... but glusterd still doesn't come up
13:35 post-factum mchangir: some logs would be ok
13:36 kshlm The glusterd logs should have some information.
13:36 mchangir should I paste them here?
13:36 kshlm post-factum, Thanks :)
13:36 post-factum @paste
13:36 kshlm mchangir, @paste
13:36 post-factum hmm, no bot here?
13:36 post-factum wtf
13:36 kshlm You keep beating me to everything.
13:37 mchangir hehe
13:37 post-factum :D
13:37 kshlm glusterbot takes a nap every once in a while.
13:37 kshlm glusterbot, wake up!
13:37 post-factum mchangir: use cat /some/log | nc termbin.com 9999
13:37 post-factum or fpaste.org via web
13:37 mchangir ah, that's better
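As a hedged example of post-factum's suggestion applied to the problem at hand: the glusterd log normally lives under /var/log/glusterfs/, but the exact filename varies by release, so treat the path below as illustrative:

    # Paste the glusterd log (the filename is often etc-glusterfs-glusterd.vol.log on 3.x trees):
    cat /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | nc termbin.com 9999
    # Or reproduce the startup failure in the foreground with debug logging:
    glusterd --debug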
13:38 post-factum glusterbot-- you are sleeping at work!
13:38 glusterbot post-factum: glusterbot's karma is now 2
13:42 mchangir kshlm, http://ur1.ca/oldwd
13:45 kshlm mchangir, 151: [2016-03-02 11:17:01.648022] D [MSGID: 0] [glusterd-utils.c:1002:glusterd_resolve_brick] 0-management: Returning -1
13:45 kshlm This is what you have to focus on.
13:45 mchangir ok
13:46 kshlm Glusterd is not able to resolve brick hostnames.
13:46 post-factum dns issue?
13:46 kshlm Is the network up correctly?
13:46 mchangir I have IP addresses in /etc/hosts ... iptables -F
13:48 kshlm Can you check if the brick addresses for volume `earth` are all resolvable?
13:48 mchangir I faced this issue in the morning and was advised to set cluster.op-version to 30707 ... I did so ... but had to re-create my volume and then things worked fine until I killed and attempted to restart glusterd
13:49 kshlm That is different from what you're facing now.
13:49 mchangir ok
13:49 kshlm Can you check if `node2` resolves correctly?
13:49 mchangir hold on ... I'll check
13:50 mchangir [root@node1 glusterfs]# ping node2
13:50 mchangir PING node2 (192.168.122.132) 56(84) bytes of data.
13:50 mchangir 64 bytes from node2 (192.168.122.132): icmp_seq=1 ttl=64 time=0.590 ms
13:50 mchangir 64 bytes from node2 (192.168.122.132): icmp_seq=2 ttl=64 time=0.422 ms
13:50 mchangir 64 bytes from node2 (192.168.122.132): icmp_seq=3 ttl=64 time=0.465 ms
13:50 mchangir 64 bytes from node2 (192.168.122.132): icmp_seq=4 ttl=64 time=1.01 ms
13:50 mchangir ^C
13:50 mchangir --- node2 ping statistics ---
13:50 glusterbot mchangir: -'s karma is now -6
13:50 glusterbot mchangir: -'s karma is now -7
13:50 mchangir 4 packets transmitted, 4 received, 0% packet loss, time 3000ms
13:50 mchangir rtt min/avg/max/mdev = 0.422/0.624/1.019/0.236 ms
13:50 mchangir [root@node2 ~]# ping node1
13:50 mchangir PING node1 (192.168.122.131) 56(84) bytes of data.
13:50 mchangir 64 bytes from node1 (192.168.122.131): icmp_seq=1 ttl=64 time=0.590 ms
13:51 mchangir 64 bytes from node1 (192.168.122.131): icmp_seq=2 ttl=64 time=0.457 ms
13:51 mchangir 64 bytes from node1 (192.168.122.131): icmp_seq=3 ttl=64 time=0.585 ms
13:51 mchangir 64 bytes from node1 (192.168.122.131): icmp_seq=4 ttl=64 time=0.473 ms
13:51 kshlm mchangir, Try not to paste console output to IRC.
13:51 mchangir ^C
13:51 mchangir --- node1 ping statistics ---
13:51 glusterbot mchangir: -'s karma is now -8
13:51 mchangir 4 packets transmitted, 4 received, 0% packet loss, time 3002ms
13:51 glusterbot mchangir: -'s karma is now -9
13:51 mchangir rtt min/avg/max/mdev = 0.457/0.526/0.590/0.063 ms
13:51 kshlm It could lead to you getting kicked out sometimes.
13:51 mchangir kshlm, noted that
13:51 kshlm Use a paste service.
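Besides ping, a quieter name-resolution check (which also avoids pasting console output into the channel) could look like this; getent goes through the system resolver, including /etc/hosts:

    # Run on each node; every peer/brick hostname should resolve:
    getent hosts node1 node2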
13:51 kshlm Next up. Check if all your peerinfo files are good.
13:51 mchangir which are those?
13:53 kshlm `/var/lib/glusterd/peers/*`
13:53 ppai joined #gluster-dev
13:53 mchangir there's only one
13:55 kshlm Does it have anything in it?
13:55 kshlm Or is it empty?
13:56 mchangir uuid=480f22cb-6fa1-4421-8811-8b418762c5eb ... state=3 ... hostname1=node2
13:57 mchangir btw, I just noticed that op-version=30704 in /var/lib/glusterd/vols/earth/info
13:57 kshlm I'm not sure if that would cause a failure. But try setting it down to whatever is in glusterd.info
13:58 mchangir ok ... but will the contents of the cksum file have anything to do with the contents of the info file?
13:59 jiffin1 joined #gluster-dev
13:59 kshlm Yes. I forgot about that.
13:59 kshlm Bump up op-version in glusterd.info then.
13:59 kdhananjay joined #gluster-dev
13:59 mchangir I'll match it to 30704 (as per info file)
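For orientation, roughly what the files being edited here look like, with the values quoted in this log; field names and layout are illustrative, not authoritative:

    # /var/lib/glusterd/peers/<peer-uuid>  (one file per peer)
    uuid=480f22cb-6fa1-4421-8811-8b418762c5eb
    state=3
    hostname1=node2

    # /var/lib/glusterd/glusterd.info  (the operating-version field discussed above)
    UUID=<this node's uuid>
    operating-version=30704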
14:02 shyam joined #gluster-dev
14:04 mchangir kshlm, no luck with setting /var/lib/glusterd/glusterd.info operating-version=30704
14:06 EinstCrazy joined #gluster-dev
14:13 kshlm mchangir, Still the same failure?
14:14 shubhendu joined #gluster-dev
14:15 mchangir kshlm, yup ... even tried setting operating-version to 30707
14:15 mchangir kshlm, haven't touched info file though
14:18 mchangir kshlm, also the contents of the cksum file are not trivial ... the output of the "cksum info" command doesn't match the contents of the cksum file
14:18 kshlm mchangir, Yes. Glusterd uses its own cksum routine.
14:19 kshlm It sorts the file, and then applies a checksum routine.
14:19 mchangir kshlm, ok
14:19 kshlm In any case, the checksum is only used when importing volinfo from another node, not during restore from disk.
14:21 mchangir kshlm, should I try changing the op-version in the info file then?
14:21 kshlm Sure.
14:21 mchangir ok, here goes
14:23 mchangir kshlm, no luck
14:28 kshlm mchangir, Is the other node running?
14:29 kshlm Just a weird guess.
14:29 mchangir kshlm, well, running in the sense that it can be pinged ... but it has the same glusterd problem ... glusterd not starting up
14:29 kshlm In `glusterd_restore` (on master) I see that volumes are restored before peers.
14:30 kshlm But restoring volumes involves doing a brick_resolve.
14:30 kshlm But brick_resolve needs the peers restored to succeed.
14:31 kshlm I don't even know how this worked till now.
14:31 kshlm The only reason I can think of is that the peer list was populated because of an import from another peer, which happened in parallel with the restore.
14:32 kshlm Maybe some new change happened somewhere in the restore call stack.
14:32 kshlm I need to figure it out.
14:32 mchangir kshlm, does that mean the peer nodes need to have started their glusterd before glusterd on the primary node is started?
14:33 kshlm I'm just guessing.
14:33 mchangir kshlm, oops ... you know better
14:36 kshlm mchangir, This is a new bug caused by Atin's change a60c39de
14:36 kshlm This is only on master for now.
14:36 kshlm Please open a bug so that we track this.
14:37 shubhendu joined #gluster-dev
14:37 kshlm The commit I mentioned added a resolve_brick call in the restore_bricks path. That shouldn't happen.
14:37 kshlm Earlier, the bricks were restored, and brick resolve happened after the peers were restored.
14:37 mchangir kshlm, ok, I'll open a bug
14:38 kshlm For now, you can revert a60c39de and continue your testing.
14:38 kshlm That is not an essential fix.
14:38 kshlm mchangir, Add atin to the bug. He should pick it up when he comes back tomorrow.
14:39 kshlm I'll be logging off now. Email the mailing-lists if you need any
14:40 kshlm ...if you need any further help.
14:40 mchangir kshlm, thanks
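For anyone following kshlm's suggestion, the revert on a local source checkout would look roughly like this; the rebuild and restart steps depend on how the tree was built and how glusterd is managed on the box:

    # Revert the offending commit on master, then rebuild and restart glusterd:
    git revert a60c39de
    make && make install          # or whatever build/install steps the tree normally uses
    systemctl restart glusterd    # or however glusterd is started on this system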
14:43 kanagaraj joined #gluster-dev
14:49 shyam joined #gluster-dev
14:54 ggarg joined #gluster-dev
14:56 pranithk joined #gluster-dev
15:02 kdhananjay joined #gluster-dev
15:04 Bhaskarakiran joined #gluster-dev
15:21 dlambrig joined #gluster-dev
15:25 shaunm joined #gluster-dev
15:26 raghu joined #gluster-dev
15:31 kshlm joined #gluster-dev
15:32 pranithk joined #gluster-dev
15:33 raghu kshlm: there?
15:33 kshlm Yeah
15:33 kshlm Are you raghub or raghug?
15:34 kshlm Change your nick :p
15:34 raghu kshlm: I am raghub :)
15:34 pranithk joined #gluster-dev
15:35 raghu kshlm: I have made 3.6.9. I am waiting for the RPMs before making the announcement
15:35 kshlm Good to know.
15:35 kshlm That's what I assumed in the meeting today.
15:38 post-factum excuse my ignorance, but who is Ravishankar N here? there is one small question regarding http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
15:39 kdhananjay post-factum: it is itisravi
15:39 kdhananjay post-factum: he's not online at the moment
15:39 post-factum i see, thanks
15:40 post-factum the question itself is: why is the arbiter volume size estimate number_of_files × 4096?
15:40 post-factum is that related to max xattr size?
15:41 kdhananjay pranithk would know
15:41 kdhananjay pranithk: ^^
15:43 pranithk post-factum: Yes, the file needs to store xattrs. Apart from that, there is scope for extra directories under .glusterfs and the space occupied by the indices directory...
15:43 pranithk post-factum: So the space needed should only be enough to store the skeleton...
15:43 kshlm Is it 4096 on all filesystems?
15:44 post-factum and should those 4096 bytes be considered 100% used?
15:44 kshlm Because AFAIK only some limit xattrs to 4096. (size of 1 fs block).
15:44 kshlm XFS should allow larger sizes.
15:45 post-factum i mean, usually, to estimate some size, i add 30–50% "just in case"
15:45 pranithk post-factum: No it is more to do with storing afr-xattrs. But the minimum it was allocating when we tested was 4096
15:45 pranithk post-factum: that is the reason it is 4096...
15:45 post-factum so, say i have 14M files and, given I'd like to reserve 50% just in case, I need ~80G arbiter?
15:46 post-factum 14*1.5*4
15:47 post-factum the reason is that there are 2 options available for us: 75G and 150G, and I see we'd stick to 150
15:47 post-factum that is why I'm asking about full 4096 allocation
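The arithmetic behind the ~80G figure, using the ~4096 bytes/file rule of thumb from the arbiter docs linked above plus the 50% margin post-factum mentions:

    # 14M files * 4096 bytes/file * 1.5 safety margin, expressed in GB:
    echo $(( 14000000 * 4096 * 3 / 2 / 1000 / 1000 / 1000 ))   # ~86 GB, i.e. the 150G option rather than 75G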
15:49 pranithk post-factum: Do you want to test this with a small set of data and see?
15:49 pranithk post-factum: This is one area we would like to hear from users...
15:49 post-factum i will definitely do that
15:50 pranithk kdhananjay++ post-factum++
15:50 glusterbot pranithk: kdhananjay's karma is now 14
15:50 glusterbot pranithk: post-factum's karma is now 2
15:50 post-factum but that is going to be used in production only if i could attach an arbiter brick to an existing volume :)
15:50 post-factum and i couldn't do that now, afaik
15:50 post-factum so, yes, only test setup
15:50 pranithk post-factum: it is in the plan...
15:51 pranithk post-factum: By the time you find out the details we should have that feature ready I hope :-)
15:51 post-factum pranithk: no technical/architecture issues with that, I hope?
15:51 pranithk post-factum: you mean adding arbiter to existing replica?
15:51 post-factum yep
15:51 pranithk post-factum: well I am under the impression it is a piece of cake. If something scary comes up let's see ;-)
15:52 post-factum i guess it is the matter of trickery with healing :)
15:53 post-factum so sounds good
15:53 Bhaskarakiran joined #gluster-dev
15:53 pranithk post-factum: :-). Okay, will be off now...
15:53 post-factum thanks for info!
15:53 post-factum pranithk++
15:53 glusterbot post-factum: pranithk's karma is now 43
15:54 pranithk post-factum: cya! We would love to hear your feedback...
15:58 shyam1 joined #gluster-dev
16:01 josferna joined #gluster-dev
16:03 dlambrig joined #gluster-dev
16:12 itisravi joined #gluster-dev
16:14 pur joined #gluster-dev
16:20 Chr1st1an joined #gluster-dev
16:33 ira_ joined #gluster-dev
16:43 hchiramm joined #gluster-dev
16:46 Bhaskarakiran joined #gluster-dev
16:52 dlambrig joined #gluster-dev
17:01 pranithk joined #gluster-dev
17:04 aspandey joined #gluster-dev
17:16 rraja_ joined #gluster-dev
17:23 shyam joined #gluster-dev
17:36 rraja_ joined #gluster-dev
17:50 aspandey joined #gluster-dev
18:00 rafi joined #gluster-dev
18:40 penguinRaider joined #gluster-dev
18:48 dlambrig left #gluster-dev
19:03 jiffin joined #gluster-dev
19:16 penguinRaider joined #gluster-dev
19:21 penguinRaider joined #gluster-dev
20:05 shaunm joined #gluster-dev
23:03 shyam joined #gluster-dev
23:54 penguinRaider joined #gluster-dev
