
IRC log for #gluster, 2016-05-26


All times shown according to UTC.

Time Nick Message
00:47 Pintomatic joined #gluster
00:57 kotreshhr joined #gluster
00:57 kotreshhr left #gluster
01:02 swebb joined #gluster
01:06 johnmilton joined #gluster
01:21 Lee1092 joined #gluster
01:22 Wojtek joined #gluster
01:24 EinstCrazy joined #gluster
01:31 jiffin joined #gluster
01:40 mowntan joined #gluster
01:41 mowntan joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:53 haomaiwang joined #gluster
01:59 pdrakeweb joined #gluster
02:01 haomaiwang joined #gluster
02:19 hi11111 joined #gluster
02:23 rafi joined #gluster
02:51 poornimag joined #gluster
03:00 newdave joined #gluster
03:01 haomaiwang joined #gluster
03:09 skoduri joined #gluster
03:15 luizcpg_ joined #gluster
03:17 luizcpg joined #gluster
03:24 julim joined #gluster
03:31 nthomas joined #gluster
03:40 aravindavk joined #gluster
03:48 yawkat joined #gluster
03:48 moss joined #gluster
03:49 Siavash___ joined #gluster
04:01 haomaiwang joined #gluster
04:12 nehar joined #gluster
04:15 ashiq joined #gluster
04:21 RameshN joined #gluster
04:28 ppai joined #gluster
04:33 shubhendu_ joined #gluster
04:43 shubhendu_ joined #gluster
04:44 Manikandan joined #gluster
04:50 gowtham_ joined #gluster
04:55 sakshi joined #gluster
05:01 haomaiwang joined #gluster
05:11 ndarshan joined #gluster
05:12 DV_ joined #gluster
05:13 prasanth joined #gluster
05:22 gem joined #gluster
05:23 karthik___ joined #gluster
05:28 aspandey joined #gluster
05:30 aravindavk joined #gluster
05:34 Bhaskarakiran joined #gluster
05:41 Lee1092 joined #gluster
05:42 hchiramm joined #gluster
05:43 level7 joined #gluster
05:45 Apeksha joined #gluster
05:45 harish_ joined #gluster
05:48 hgowtham joined #gluster
06:01 haomaiwang joined #gluster
06:04 kdhananjay joined #gluster
06:13 ppai joined #gluster
06:18 overclk joined #gluster
06:23 Siavash___ joined #gluster
06:23 null_ joined #gluster
06:23 voidspacexyz joined #gluster
06:29 voidspacexyz joined #gluster
06:30 harish_ joined #gluster
06:34 karnan joined #gluster
06:34 ppai joined #gluster
06:34 mkzero joined #gluster
06:41 atalur joined #gluster
06:43 voidspacexyz1 joined #gluster
06:44 gigzbyte joined #gluster
06:45 mkzero left #gluster
06:45 Wizek__ joined #gluster
06:45 Wizek joined #gluster
06:48 jvandewege_ joined #gluster
06:49 pur__ joined #gluster
06:50 armyriad joined #gluster
06:50 nthomas joined #gluster
06:52 armyriad joined #gluster
06:54 RameshN joined #gluster
06:54 karnan joined #gluster
06:57 jvandewege joined #gluster
07:01 haomaiwang joined #gluster
07:02 ramky joined #gluster
07:09 aspandey joined #gluster
07:11 Manikandan joined #gluster
07:13 jri joined #gluster
07:21 voidspacexyz1 joined #gluster
07:30 karnan joined #gluster
07:32 Wizek joined #gluster
07:34 Gnomethrower joined #gluster
07:35 ahino joined #gluster
07:35 Manikandan joined #gluster
07:37 ctria joined #gluster
07:44 ctria joined #gluster
07:49 deniszh joined #gluster
08:01 haomaiwang joined #gluster
08:11 kovshenin joined #gluster
08:16 karthik___ joined #gluster
08:19 Manikandan_ joined #gluster
08:20 Slashman joined #gluster
08:29 karnan joined #gluster
08:30 refj joined #gluster
08:32 refj I have a distributed replicated 2x2 setup. A recent breakdown has left a single file in input/output error state (split-brain). This file is not listed with the heal info or heal info split-brain. When mounted locally with nfs on the glusterfs nodes it is found on all gluster nodes except one. How do I fix this when the file is not listed with the heal command?
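
A rough sketch of how such a stray split-brain copy is usually inspected and repaired by hand, assuming a replica volume named myvol and hypothetical brick/file paths (none of these names come from the log); on newer 3.7-era releases the CLI can also be pointed at the file explicitly even when heal info does not list it:

    # on each brick host, inspect the AFR changelog xattrs for the file
    getfattr -d -m . -e hex /export/brick1/path/to/file

    # if one copy is known good, resolve from the CLI (gluster >= 3.7)
    gluster volume heal myvol split-brain source-brick server1:/export/brick1 /path/to/file

    # fallback: on the bad brick only, remove the stale copy plus its gfid
    # hard link under .glusterfs/<gfid[0:2]>/<gfid[2:4]>/<full-gfid>,
    # then trigger a heal
    rm /export/brick1/path/to/file
    rm /export/brick1/.glusterfs/ab/cd/abcd1234-...      # hypothetical gfid
    gluster volume heal myvol full
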
08:34 newdave joined #gluster
08:37 Gnomethrower joined #gluster
08:39 Gnomethrower joined #gluster
08:40 Gnomethrower joined #gluster
08:42 Gnomethrower joined #gluster
08:50 shubhendu_|lunch joined #gluster
08:58 k4n0 joined #gluster
09:01 haomaiwang joined #gluster
09:05 ahino joined #gluster
09:14 mpietersen joined #gluster
09:17 post-factum if i'd like to use 1 host as a glusterd node (with no bricks, just for volume metadata), is it sufficient to open tcp/24007 only?
09:17 ahino joined #gluster
09:18 post-factum JoeJulian: ^^
09:30 level7_ joined #gluster
09:36 ahino joined #gluster
09:39 glafouille joined #gluster
09:50 ctria joined #gluster
09:53 partner IMO yes
09:54 paul98 joined #gluster
09:58 ppai joined #gluster
10:01 haomaiwang joined #gluster
10:03 marcoc_ joined #gluster
10:08 paul98 hi, is it possible to have one partition but then have more than one volume mapped to that partition?
10:09 micke paul98: if you have LVM on top of your partition it should work.
10:13 hackman joined #gluster
10:13 paul98 yup i do
10:14 paul98 so when i create the volume do i tell it how big it should be?
10:14 arcolife joined #gluster
10:16 Manikandan_ joined #gluster
10:20 bfoster joined #gluster
10:24 partner it's the size of the lvm volume that makes your brick
10:24 paul98 ah ok
10:24 paul98 so i need to make the lvm volume the right size
10:24 partner yeah
10:24 paul98 then i can make the brick
10:24 paul98 ok cool :)
10:25 partner and if you want to increase the size of your brick you can either extend the lvm or add another brick
10:25 paul98 ok makes sense
10:25 paul98 i'll make the lvm to 1tb then and make a new brick based on that
10:25 paul98 so could i have more than one brick (lvm partition) assigned to a gluster volume?
10:27 paul98 so just to clear things up - i currently have this, /dev/sdj1             5.4T  1.1G  5.1T   1% /data
10:27 paul98 which i wanna split into 1tb and then 4tb
10:31 partner yeah, you can have lots of bricks, they are your volume building blocks and can be added on the fly
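
As a sketch of the two growth paths partner mentions (grow the LVM behind a brick, or add another brick), with invented volume-group, volume and mount names; the XFS case is shown, ext4 would use resize2fs instead:

    # grow an existing LVM-backed brick
    lvextend -L +500G /dev/gluster_vg/brick1
    xfs_growfs /bricks/brick1              # grow the mounted filesystem

    # or add another brick on the fly and spread data onto it
    # (plain distributed volume shown; replicated volumes need bricks
    # added in multiples of the replica count)
    gluster volume add-brick myvol server1:/bricks/brick2
    gluster volume rebalance myvol start
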
10:31 nthomas joined #gluster
10:34 paul98 all this partition stuff confuses me!
11:00 level7 joined #gluster
11:01 haomaiwang joined #gluster
11:03 hi11111 joined #gluster
11:03 harish_ joined #gluster
11:04 partner i can explain further but as long as you have anything mountable at hand you can turn it into brick
11:04 partner be it a physical harddisk, a RAID or a logical volume (LVM)
11:05 partner first two are limited by the actual physical size obviously but with LVM you can create logical volume of any size you want
11:06 paul98 ah ok makes sense
11:06 partner say, you have RAID-10 of 10 pieces of 2TB drives. that gives you 10TB space from the raid
11:06 paul98 all bit new to me
11:06 paul98 i can have 2 lvm's at 5tb
11:06 partner now, you can use LVM to create only a 1TB logical volume out of that, create a filesystem there and give it to glusterfs to be used as a brick
11:07 partner yeah if you want 1TB and 4TB
11:07 gem joined #gluster
11:09 partner and when you create volume you assign bricks to it and the size of your volume will be the size of your brick(s): http://deliveryimages.acm.org/10.1145/2560000/2555790/11549f1.png
11:09 paul98 ah ok
11:10 nehar_ joined #gluster
11:10 partner also the type of the volume affects the usable size
11:11 partner https://www.gluster.org/community/documentation/images/e/eb/Distribute-volume.png
11:11 partner https://www.gluster.org/community/documentation/images/8/85/Replicate-volume.png
11:11 partner but if you only have one brick you don't have to worry about that stuff, you get the size of your LVM..
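
For reference, a minimal sketch of how the volume type is picked at creation time (hostnames, brick paths and the volume name are made up): a plain distribute volume adds the brick sizes together, while replica 2 stores every file twice, so usable space is the size of one brick:

    # distribute: usable size = sum of the bricks
    gluster volume create datavol server1:/bricks/b1 server2:/bricks/b1

    # replica 2: usable size = one brick's worth
    gluster volume create datavol replica 2 server1:/bricks/b1 server2:/bricks/b1

    gluster volume start datavol
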
11:11 paul98 yer i want to have two volumes
11:12 paul98 one for windows and one for linux
11:12 partner not sure if i'm clear here, been away from the community for a year and i'm a bit rusty
11:12 paul98 nope makes sense partner
11:13 partner i mean, of course there is the possibility to partition that 5TB disk with fdisk or whatever tool but that is then static and cannot be easily altered
11:13 partner LVM gives the benefit of being able to do resizings somewhat easily but of course that can be dropped away from between if you just partition the old fashioned way
11:14 paul98 yup
11:14 paul98 i did just change the partition to an lvm partition
11:14 paul98 then i can do pvcreate etc and make the lvm's
11:14 paul98 then i can create the bricks etc
11:14 partner yup, just make filesystems there and you're almost done
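
A rough sketch of the pvcreate-onwards steps paul98 describes, splitting one physical volume into a 1TB and a 4TB brick; the volume-group, LV and mount names are invented, and this assumes /dev/sdj1 is free to be reused as an LVM physical volume (its current contents would be lost):

    pvcreate /dev/sdj1
    vgcreate gluster_vg /dev/sdj1
    lvcreate -L 1T -n brick_linux gluster_vg
    lvcreate -L 4T -n brick_windows gluster_vg

    # XFS with 512-byte inodes is the commonly recommended brick filesystem
    mkfs.xfs -i size=512 /dev/gluster_vg/brick_linux
    mkfs.xfs -i size=512 /dev/gluster_vg/brick_windows

    mkdir -p /bricks/linux /bricks/windows
    mount /dev/gluster_vg/brick_linux /bricks/linux
    mount /dev/gluster_vg/brick_windows /bricks/windows
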
11:16 paul98 :)
11:16 paul98 all fun and games / learning
11:17 MikeLupe joined #gluster
11:17 partner yup :)
11:17 paul98 i was happy with one big partition but it's not really the right way of doing it :p
11:19 partner there are so many ways of doing things and none are exactly right.. :D
11:20 partner how many bricks, how big the bricks, what raids and with how many disks,...
11:20 paul98 yup, it's just what you think is best lol
11:20 partner but what glusterfs gives you is the ability to shuffle bricks around
11:21 partner i long ago went into production accidentally with one server incorrectly configured (wrong raid and stuff), i simply asked gluster to move the data out, then recreated the raid and added it back to gluster
11:21 partner no downtime involved with the data availability, superb
11:22 partner "war stories" .. :D
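
The brick migration partner describes would, on a plain distributed volume, look roughly like this (volume and brick names invented; a replicated volume would also need the replica count handled on remove-brick/add-brick):

    gluster volume remove-brick myvol server1:/bricks/old start
    gluster volume remove-brick myvol server1:/bricks/old status   # wait for "completed"
    gluster volume remove-brick myvol server1:/bricks/old commit

    # rebuild the RAID / filesystem on server1, then put it back
    gluster volume add-brick myvol server1:/bricks/new
    gluster volume rebalance myvol start
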
11:22 sandersr joined #gluster
11:22 paul98 ha yes! does sound good
11:23 paul98 to be fair we are going to be using it on a basic level
11:23 johnmilton joined #gluster
11:23 paul98 but it's just having it replicated between both data centres which is what is good
11:26 sandersr Hi,
11:26 sandersr I
11:27 partner so you replicate the volume between two datacenters?
11:27 sandersr I'm trying to start a gluster volume which has few snapshots and I've got this error message:
11:27 sandersr volume start: localbackup: failed: Volume id mismatch for brick XXX:/data/glusterfs/localbackup/lv_backup/brick. Expected volume id db37b10d-a0b9-48c1-bdcd-c5c6d06943c4, volume id d4561633-5dc0-494e-987e-1056f55f101f found
11:27 sandersr Any idea how to make it work?
11:28 sandersr /var/lib/glusterd/snaps/ lists 4 backups for localbackup volume. ID d4561633-5dc0-494e-987e-1056f55f101f is not even the newest backup
11:31 arcolife joined #gluster
11:32 julim joined #gluster
11:34 karnan joined #gluster
11:38 ppai joined #gluster
11:38 sandersr I ended up doing this: gluster snapshot restore localbackup-_GMT-2016.05.22-02.00. and the volume started fine. No idea if that's the best solution in this case
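
Snapshot restore worked here; another approach often used for this exact error, assuming the data on the brick really does belong to that volume, is to reset the volume-id extended attribute on the brick root to the id glusterd expects (the expected id is in the error message, or in /var/lib/glusterd/vols/localbackup/info):

    # check what is currently stored on the brick root
    getfattr -n trusted.glusterfs.volume-id -e hex /data/glusterfs/localbackup/lv_backup/brick

    # set it to the expected id (dashes stripped, hex-encoded)
    setfattr -n trusted.glusterfs.volume-id \
        -v 0xdb37b10da0b948c1bdcdc5c6d06943c4 \
        /data/glusterfs/localbackup/lv_backup/brick

    gluster volume start localbackup
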
11:59 sobersabre joined #gluster
12:00 Manikandan_ joined #gluster
12:00 sobersabre hi guys. is there a way to have 0 downtime of the service via posix gluster client when one of the bricks goes the **ck down ?
12:01 haomaiwang joined #gluster
12:02 sobersabre hm...
12:03 julim joined #gluster
12:12 Gnomethrower joined #gluster
12:13 karnan joined #gluster
12:19 unclemarc joined #gluster
12:22 bfoster joined #gluster
12:24 jiffin joined #gluster
12:36 atalur joined #gluster
12:42 Apeksha joined #gluster
12:49 julim joined #gluster
13:02 shyam joined #gluster
13:04 dlambrig_ joined #gluster
13:13 Lee1092 joined #gluster
13:15 chirino_m joined #gluster
13:19 rafi joined #gluster
13:20 plarsen joined #gluster
13:21 skylar joined #gluster
13:31 level7_ joined #gluster
13:32 atinm joined #gluster
13:36 rwheeler joined #gluster
13:39 EinstCrazy joined #gluster
13:39 haomaiwang joined #gluster
13:39 EinstCrazy joined #gluster
13:42 haomaiwang joined #gluster
13:49 TvL2386 joined #gluster
13:53 haomaiwa_ joined #gluster
13:59 B21956 joined #gluster
14:01 luizcpg joined #gluster
14:01 haomaiwang joined #gluster
14:07 kovsheni_ joined #gluster
14:12 paul98 partner: ?
14:21 Manikandan joined #gluster
14:22 ira joined #gluster
14:22 hagarth joined #gluster
14:30 shyam joined #gluster
14:36 wushudoin joined #gluster
14:37 wushudoin joined #gluster
14:39 Nebraskka joined #gluster
14:42 ctria joined #gluster
14:51 newdave joined #gluster
14:56 ctria joined #gluster
14:58 shyam left #gluster
15:01 haomaiwang joined #gluster
15:02 shyam joined #gluster
15:11 kpease joined #gluster
15:12 JoeJulian post-factum: yes.
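
As a sketch, opening only the management port on such a brick-less glusterd node could look like this (firewalld and plain iptables variants; nodes that do host bricks additionally need the brick ports, 49152 and up on current releases):

    firewall-cmd --permanent --add-port=24007/tcp
    firewall-cmd --reload

    # or with plain iptables
    iptables -I INPUT -p tcp --dport 24007 -j ACCEPT
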
15:13 nthomas joined #gluster
15:13 squizzi joined #gluster
15:14 JoeJulian partner++
15:14 glusterbot JoeJulian: partner's karma is now 10
15:14 Gnomethrower joined #gluster
15:15 JoeJulian sobersabre: That's what it does, yep.
15:15 JoeJulian I should have put an asterisk.
15:16 johnmilton joined #gluster
15:16 JoeJulian sobersabre: The asterisk is, if a replica server goes away without closing its tcp socket, there's no way for the rest of the cluster to know that. There's a ,,(ping-timeout) that applies in that case.
15:16 glusterbot sobersabre: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. With an average MTBF of 45000 hours for a server, even just a replica 2 would result in a 42 second MTTR every 2.6 years, or 6 nines of uptime.
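
For reference, the timeout glusterbot mentions is a per-volume option; it can be inspected and changed like this (volume name made up, and the 42-second default is usually best left alone for the reasons given above):

    gluster volume get myvol network.ping-timeout     # newer releases
    gluster volume set myvol network.ping-timeout 42
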
15:19 overclk joined #gluster
15:29 partner phew, i was worried karma would drop over time due to inactivity.. :D
15:29 JoeJulian lol
15:29 stormi joined #gluster
15:30 stormi Hi there, anyone fancy some troubleshooting?
15:30 stormi (I know, I should just ask, so that's what I'll do)
15:30 JoeJulian :D
15:31 JoeJulian Ooooh, fancy troubleshooting. Do we get to use large words? ;)
15:31 stormi anything
15:32 stormi I'm debugging an issue on a server on which the glusterd service, or rather glusterfs-server as it's called on that system, fails to start. Apparently, this is due to /var/lib/glusterd/peers/ having become empty
15:33 stormi I'll give you the logs, but firstly, do you know what kind of event could cause a peer entry to disappear like that?
15:34 stormi And here are the logs when the server tries to start: https://framabin.org/?ab56c1e18c8774a2#ocTML0dkG7SrmNu/gMsq7LhARzOTW+e4V54ALAJGw4A=
15:34 glusterbot Title: Framabin - Transmettez des messages chiffrés (at framabin.org)
15:35 stormi Re-building the peer entry in the peers/ directory solves the issue, but I'm now looking for the following:
15:35 stormi - find out why it happened (if there's a chance)
15:35 stormi - find out why it doesn't auto-repair, and how to make it auto-repair (server is hosted by the clients themselves)
15:36 stormi That's it
15:37 JoeJulian Ohh, a pastebin en francaise. We are fancy. :)
15:37 JoeJulian Do you know if the rest of /var/lib/glusterd got wiped as well?
15:37 JoeJulian You may have the wrong uuid assigned to this server if that's the case.
15:38 JoeJulian s/may have/may also have/
15:38 glusterbot What JoeJulian meant to say was: You may also have the wrong uuid assigned to this server if that's the case.
15:38 stormi JoeJulian: /var/lib/glusterd/glusterd.info was
15:38 amye joined #gluster
15:39 JoeJulian Yeah, so you'll have to get the uuid from another server, you can find it in its peers directory
15:39 stormi JoeJulian: yes, that's what the person who solved the issue at first did
15:40 stormi If I understand the logs correctly: glusterd thinks it's a new install, but then it fails to start?
15:40 JoeJulian but you're still getting "[2016-03-18 09:10:34.944066] I [glusterd-store.c:1404:glusterd_retrieve_uuid] 0-: No previous uuid is present" in the log.
15:41 stormi JoeJulian: that's the logs of before solving it
15:41 JoeJulian Ok.
15:41 JoeJulian To find out why it's really failing to start try just running glusterd in the foreground, "glusterd --debug"
15:42 JoeJulian (as root of course)
15:42 stormi My current task is to try to make this not happen anymore in the future and detect it if possible and have it auto-repair if I can
15:45 JoeJulian Most everything under /var/lib/glusterd is supposed to be the same on every server with two exceptions: glusterd.info needs to have its own uuid, and peers needs to have all the peers *except* for itself.
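
A sketch of what that looks like on disk, with illustrative uuids and an operating-version picked only as an example; on the broken node, glusterd.info gets the uuid the other peers already have on file for it, and one file per *other* peer goes under peers/:

    # on a healthy peer, find the uuid it has recorded for the broken node
    grep -r . /var/lib/glusterd/peers/

    # /var/lib/glusterd/glusterd.info on the broken node
    #   UUID=1e2f3a4b-5c6d-7e8f-9a0b-1c2d3e4f5a6b
    #   operating-version=30706

    # /var/lib/glusterd/peers/<uuid-of-each-other-peer> on the broken node
    #   uuid=9f8e7d6c-5b4a-3f2e-1d0c-b9a8f7e6d5c4
    #   state=3
    #   hostname1=10.2.77.2
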
15:45 stormi glusterd --debug https://framabin.org/?96752f8f2954d232#4Z2hkADG18Pyj5B7XNwUnnU3moSh/I2tAxbjY3o1aeY=
15:45 glusterbot Title: Framabin - Transmettez des messages chiffrés (at framabin.org)
15:45 johnmilton joined #gluster
15:46 stormi It looks like, from the logs, that my node kind of knows there's a peer out there (10.2.77.2 which is the first installed) but fails to find its uuid (logically)
15:47 stormi So I guess if I detect this situation (since I still don't know how it occurs), I could rebuild the missing peers entries
15:47 stormi I have to go but I'll probably come back tomorrow, thanks for your time
15:48 JoeJulian I'll write up a blog article for this. It's a good topic.
15:49 partner JoeJulian: i was just about to say "i find your lack of blog posts disturbing.." :)
15:49 JoeJulian :)
15:49 partner but perhaps they go to some other place nowadays instead of your own site?
15:50 JoeJulian I just haven't been finding anything worth blogging about.
15:50 JoeJulian Lots of little edge cases that nobody will ever hit.
15:50 partner yeah, well of course if there is nothing to say then its probably best to shut up
15:50 JoeJulian That's always been my way. My wife on the other hand....
15:51 partner i wish i had the power to write, this journey with openstack has been hmph well colorful
15:51 level7 joined #gluster
15:51 partner hehe
15:51 JoeJulian Yeah, and the problem there is that there are a million ways to do it right.
15:52 JoeJulian Documenting what I've learned in openstack could be a whole book.
15:52 partner but could at least write about some certain details and issues and how we resolved them, people do enjoy reading such stuff myself included, google will do the preliminary filtering
15:52 partner but i fail to find the time, even if i would get paid for it...
15:53 partner then again, its 7PM and i'm once again here instead of doing something at home.. fail :)
15:53 JoeJulian My problem has been, over the last 4 or 5 years, that I've been down in the details of things without a good global feel of it all. I think I'm finally there but now I can't remember any specific problems to write about while getting to this point.
15:53 level7_ joined #gluster
15:54 JoeJulian In the job listings we posted, I specifically included blogging and speaking at conferences as part of the job description. :)
15:55 partner i would love to speak too.. i love conferences but too rarely can attend.. i did go to Austin last month but i have a feeling next trip will be 5+ years ahead..
15:59 dgandhi joined #gluster
15:59 dgandhi joined #gluster
15:59 JoeJulian I make it clear that part of my professional development is going to conferences and if I can't develop my skills in that way I'll need to find a company that values that better.
16:00 JoeJulian I've also pointed out that in order to be taken seriously as an industry thought leader, we need to be speaking at conferences. Without that, we're just users.
16:00 dgandhi joined #gluster
16:01 dgandhi joined #gluster
16:01 haomaiwang joined #gluster
16:07 jiffin joined #gluster
16:17 jiffin joined #gluster
16:19 harish joined #gluster
16:23 techsenshi joined #gluster
16:33 MrAbaddon joined #gluster
16:41 squizzi_ joined #gluster
16:46 shyam joined #gluster
16:51 partner well put
16:56 luizcpg_ joined #gluster
17:01 haomaiwang joined #gluster
17:03 julim_ joined #gluster
17:15 kpease joined #gluster
17:15 Siavash___ joined #gluster
17:21 kpease joined #gluster
17:30 amye joined #gluster
17:43 jri joined #gluster
17:46 jiffin joined #gluster
17:59 luizcpg joined #gluster
18:01 haomaiwang joined #gluster
18:03 squizzi_ joined #gluster
18:08 ashka joined #gluster
18:09 Wizek joined #gluster
18:12 rwheeler joined #gluster
18:12 luizcpg_ joined #gluster
18:15 amye joined #gluster
18:16 jiffin joined #gluster
18:22 squizzi joined #gluster
18:23 squizzi joined #gluster
18:28 Wizek joined #gluster
18:41 ahino joined #gluster
18:41 gem joined #gluster
18:43 deniszh joined #gluster
18:48 squizzi_ joined #gluster
19:01 haomaiwang joined #gluster
19:22 Siavash___ joined #gluster
19:23 luizcpg joined #gluster
19:38 deniszh joined #gluster
19:43 Wizek joined #gluster
19:49 jlp1 joined #gluster
19:56 amye joined #gluster
20:01 haomaiwang joined #gluster
20:11 DV_ joined #gluster
20:16 kovshenin joined #gluster
20:20 deniszh joined #gluster
20:25 Siavash___ joined #gluster
21:01 haomaiwang joined #gluster
21:13 luizcpg joined #gluster
21:14 johnmilton joined #gluster
21:23 skylar joined #gluster
21:35 hagarth joined #gluster
21:35 kpease joined #gluster
21:52 sankarshan_away joined #gluster
21:56 shyam joined #gluster
22:01 haomaiwang joined #gluster
23:01 haomaiwang joined #gluster
23:17 luizcpg_ joined #gluster
23:27 mowntan joined #gluster
23:33 julim joined #gluster
23:50 luizcpg_ joined #gluster
