
IRC log for #gluster, 2016-07-19


All times shown according to UTC.

Time Nick Message
00:31 hchiramm joined #gluster
00:35 caitnop joined #gluster
00:41 ghollies joined #gluster
00:59 shdeng joined #gluster
01:11 javi404 joined #gluster
01:21 PaulCuzner joined #gluster
01:22 wadeholler joined #gluster
01:37 Lee1092 joined #gluster
01:46 hagarth joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:18 harish_ joined #gluster
02:36 JesperA joined #gluster
02:47 kramdoss_ joined #gluster
02:50 chirino_m joined #gluster
02:53 Gnomethrower joined #gluster
02:55 masuberu joined #gluster
03:18 magrawal joined #gluster
03:42 PaulCuzner left #gluster
03:44 PaulCuzner joined #gluster
03:49 nishanth joined #gluster
03:55 muneerse joined #gluster
03:56 RameshN joined #gluster
04:02 atinm joined #gluster
04:03 shubhendu joined #gluster
04:04 poornimag joined #gluster
04:11 itisravi joined #gluster
04:21 sanoj joined #gluster
04:24 nehar joined #gluster
04:24 gem joined #gluster
05:04 rafi joined #gluster
05:06 ppai joined #gluster
05:06 jiffin joined #gluster
05:12 ashiq joined #gluster
05:13 armyriad joined #gluster
05:14 Bhaskarakiran joined #gluster
05:17 mamb joined #gluster
05:19 ndarshan joined #gluster
05:22 prasanth joined #gluster
05:23 kotreshhr joined #gluster
05:23 satya4ever joined #gluster
05:25 aravindavk joined #gluster
05:28 aspandey joined #gluster
05:37 sakshi joined #gluster
05:37 Apeksha joined #gluster
05:42 hchiramm joined #gluster
05:43 derjohn_mob joined #gluster
05:43 Muthu_ joined #gluster
05:48 Bhaskarakiran_ joined #gluster
05:51 Manikandan joined #gluster
05:51 karnan joined #gluster
05:54 Manikandan joined #gluster
05:54 hgowtham joined #gluster
05:56 devyani7_ joined #gluster
05:59 karthik_ joined #gluster
06:04 ramky joined #gluster
06:11 mhulsman joined #gluster
06:12 md2k joined #gluster
06:14 msvbhat joined #gluster
06:15 itisravi joined #gluster
06:25 pur_ joined #gluster
06:32 derjohn_mob joined #gluster
06:36 Saravanakmr joined #gluster
06:41 mhulsman1 joined #gluster
06:43 kdhananjay joined #gluster
06:48 devyani7_ joined #gluster
06:58 anil joined #gluster
07:02 PaulCuzner joined #gluster
07:11 mchangir joined #gluster
07:17 ic0n joined #gluster
07:20 Gnomethrower joined #gluster
07:23 derjohn_mob joined #gluster
07:26 kovshenin joined #gluster
07:27 Bhaskarakiran joined #gluster
07:31 hackman joined #gluster
07:31 knightsamar joined #gluster
07:37 Seth_Karlo joined #gluster
07:42 MikeLupe joined #gluster
07:52 Slashman joined #gluster
07:56 armin joined #gluster
07:57 jwd joined #gluster
08:00 prasanth joined #gluster
08:02 pdrakewe_ joined #gluster
08:06 armyriad joined #gluster
08:07 PaulCuzner joined #gluster
08:14 ahino joined #gluster
08:20 gem joined #gluster
08:28 Wizek_ joined #gluster
08:36 jesk I just installed gluster on one of my clients
08:36 jesk my server uses
08:37 jesk glusterfs.x86_64                       3.8.0-2.el7                     @centos-gluster38
08:37 jesk and the client
08:37 jesk glusterfs.x86_64                     3.8.1-1.el7                    @centos-gluster38
08:37 jesk the filesystem hangs when doing anything on it from the client
08:38 jesk and the log says
08:38 jesk [2016-07-19 06:14:48.234634] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-gv1-client-0: Server and Client lk-version numbers are not same, reopening the fds
08:38 glusterbot jesk: This is normal behavior and can safely be ignored.
08:39 armyriad joined #gluster
08:39 Klas haha, nice bot action
08:40 Klas jesk: what do you mean with filesystem hangs?
08:40 Klas do you mean that the volume mount crashes?
08:40 Klas and just for the client or for all clients?
08:41 Klas (I won't be able to help, just trying to aid the process of finding the error)
08:41 jesk even a  "ls" just hangs
08:41 Klas ah, you mean that the mount is unresponsive
08:41 Klas how many files are there?
08:41 jesk just for this client, the client on the server itself not
08:41 Klas and how long have you waited?
08:41 jesk i wait for 5 minutes now
08:42 Klas can you cancel the ls?
08:42 jesk until I disconnect with ssh escape
08:42 Klas ctrl+c I mean
08:42 harish_ joined #gluster
08:42 jesk there are 5 directories
08:42 jesk ctrl+c doesnt help as Iam remotely connected, I need to ssh escape
08:43 Klas effectively stale mount then
08:43 jesk hmmm
08:43 jesk I have an idea
08:43 jesk maybe its selinux?
08:44 Klas haha
08:44 Klas ALWAYS BLAME SELINUX!
08:44 Klas ;)
08:44 Klas it's always a nice thing to temporarily turn it off when uncertain why something doesn't work at least
08:46 jesk yeah, sorry for that.
08:46 jesk i just turned it on because of some java RMI shit, which may had the needt to use it
08:47 jesk will try now with disabled selinux
08:47 Klas Ah, I meant no disrespect
08:47 Klas I actually loathe guides which say "turn off SELinux"
08:48 Klas I don't use it
08:48 Klas but it has got uses
08:48 Klas and the right thing to say is "see that SELinux doesn't block" and let the user decide =P
08:49 Klas turning it off for lab purposes, to see if it's the culprit, is very legitimate
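Klas's suggestion, ruling SELinux in or out without permanently disabling it, can be sketched as follows (assuming a RHEL/CentOS host with the standard SELinux and audit tooling installed; run as root):

```shell
# Check the current SELinux mode (Enforcing / Permissive / Disabled)
getenforce

# Temporarily switch to permissive for testing; this does not survive
# a reboot, so it is safe for a quick lab check
setenforce 0

# While reproducing the hang, look for recent SELinux denials (AVCs)
ausearch -m avc -ts recent
```

If `ausearch` shows AVC denials that line up with the hang, a targeted policy fix is preferable to leaving SELinux disabled.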
08:54 jesk turned off it works :)
08:54 jesk damn
08:54 jesk sorry
08:54 jesk i hate selinux to be honest, its too complex to manage it, except maybe for companies who have whole teams just investigating and managing SElinux
08:56 Klas hehe
08:56 jesk i just turned it to "permissive"
08:56 jesk ls works
08:56 Klas I've never really worked with red hat
08:56 jesk but write hangs again
08:56 jesk again rebooting
08:56 Klas always debian or ubuntu
08:56 jesk ma also
08:57 jesk coming from FreeBSD -> Debian -> Ubuntu -> OSX -> CentOS
08:57 jesk but my main role is Network Design :)
08:57 Klas why OSX?
08:57 jesk just for my laptop
08:57 Klas ah
08:57 jesk because I hadnt the time to care about my workstation anymore
08:57 Klas OSX is fine as desktop
08:57 Klas I loathe it
08:57 jesk yes it is
08:58 jesk my mbp is broken currently
08:58 Klas but it works quite well if you can accept the gui
08:58 jesk thats why i use centos also as laptop at the moment
08:58 Klas and accept laptops without trackpoint
08:58 Klas centos laptop?
08:58 Klas *brrr*
08:58 Klas fedora is way preferrable if you want rpm-based
08:58 jesk i'am using it to stay in touch with current server things
08:58 jesk to get used to it
08:59 Klas hehe
08:59 Klas fedora is basically the same
08:59 Klas just with updated and working desktop packages
09:00 jesk I didn't touched linux/unix as workstation since 2009
09:01 jesk I'am quite impressed how smooth things are running these days, even on redhat
09:01 jesk I used from 1999 - 2006 FreeBSD as desktop OS
09:01 jesk that was pain :D
09:02 jesk compiling everything, and freebsd ports were the best back then
09:03 jesk wow
09:03 jesk selinux turned off, it's still hanging
09:05 nishanth joined #gluster
09:06 jesk oh well on the server itself the write itself is also not working
09:06 jesk just the read
09:09 devyani7_ joined #gluster
09:09 fsimonce joined #gluster
09:10 jesk and its directory specific
09:10 jesk interesting
09:11 atalur joined #gluster
09:13 gem joined #gluster
09:15 md2k joined #gluster
09:16 rafaels joined #gluster
09:16 jesk logs dont tell me anything
09:18 jesk any idea how to troubleshoot this?
09:18 [diablo] joined #gluster
09:18 jesk now its definitely *not* SElinux
09:19 jesk its gluster itself
09:19 satya4ever joined #gluster
09:20 ivan_rossi joined #gluster
09:26 msvbhat joined #gluster
09:32 hgowtham joined #gluster
09:34 lanning joined #gluster
09:37 wadeholler joined #gluster
09:38 anil_ joined #gluster
09:39 msvbhat joined #gluster
09:39 jesk I wonder why it doesnt tell me anything about stale stuff
09:50 tg2 joined #gluster
09:50 muneerse2 joined #gluster
09:52 atalur joined #gluster
09:58 Klas split brain?
09:58 Klas also, stale mounts is just my diagnostic, haven't really found out what actually happened
10:00 jesk peer status would show a split-brain, right?
10:01 Klas don't think so
10:01 Klas easiest way to test is to write to a new file
10:01 Klas are you running with quorum?
10:02 jesk creating new files doesn't work currently for this volume
10:02 jesk on another volume it does work
10:02 jesk damn latency
10:03 Klas you have stopped/started the volume?
10:03 jesk I have two volumes, one seem to work, the other one not
10:03 jesk yes I did
10:03 Klas and you have unmounted all clients at some point?
10:05 jesk yes
10:05 jesk I just did the whole process again, stop, umounting and so on
10:05 jesk now it does work
10:05 jesk but I bet its just a matter of time again
10:05 jesk will mount it now on the clients as well
10:06 Klas yes, in my lab attempts, I've reached the point where I've simply given up and restarted the entire process again
10:06 Klas especially before I rolled my own packages and tried the default ubuntu xenial version
10:06 jesk yeah
10:07 jesk as soon as this new client connects and tries a write it hangs again
10:07 Klas strange
10:09 jesk its this client
10:10 atalur joined #gluster
10:11 Klas sounds likely
10:11 Klas do you have older client versions in your repo
10:13 jesk no
10:13 Klas to an archive then ;)
10:13 jesk just using the centos repos
10:13 Klas is centos running 3.8 already?
10:14 jesk you have the choice
10:14 jesk 3.6, 3.7 or 3.8
10:15 Klas ah
10:15 Klas sounds reasonable
10:16 Klas need anything from 3.8?
10:16 Klas both 3.6 and 3.7 should be compatible with 3.8
10:17 wadeholler joined #gluster
10:18 jesk ok good to know
10:18 jesk will try the other releases as well
10:19 jesk but doesnt give me a good feeling
10:22 sbulage joined #gluster
10:26 lanning joined #gluster
10:26 Klas I've never worked with clustered file storage and gotten a good feeling
10:27 Klas gluster is probably the least scary I've tried though
10:28 bfoster joined #gluster
10:35 hackman joined #gluster
10:37 devyani7_ joined #gluster
10:39 jesk tried gfs2 which fucked up a lot
10:39 jesk clusters are the last solution one should use imho
10:40 jesk I always avoid any kind of cluster vendor thing in networks
10:40 jesk distributed and autonomous control-plane
10:42 Klas unfortunately, redundancy is almost an absolute requirement
10:43 jesk i saw so many cluster fuckups, clusters reduce uptime in so many cases
10:43 Klas but, yeah, it's always way, way easier to build an almost stable product without redundancy =P
10:43 Klas very true
10:44 cloph too bad tuning it is a black box..
10:44 johnmilton joined #gluster
10:44 jesk networks can be build redundant without clusters, thats why the internet is so stable
10:45 jesk state is evil
10:52 lanning joined #gluster
10:57 Klas hehe, TCP/IP logic on files would be fun
10:57 Klas not very efficient though =P
11:16 robb_nl joined #gluster
11:19 jiffin1 joined #gluster
11:23 Bhaskarakiran joined #gluster
11:28 lanning joined #gluster
11:31 sakshi joined #gluster
11:43 ira joined #gluster
11:44 nishanth joined #gluster
11:44 mhulsman joined #gluster
12:01 jiffin joined #gluster
12:09 g3kk0 joined #gluster
12:10 g3kk0 is it normal for Gluster to re-create files which were deleted whilst one of the cluster members was offline?
12:15 fredk- joined #gluster
12:18 Klas that does sound like a likely behaviour
12:19 Klas unless I'm mistaken, the metadata about a file is basically stored in the file, thus, it's nonexistance shouldn't be stored
12:19 Klas split-brain is mostly guarded against to avoid files being in split-brain...
12:20 Klas btw, VERY far from en expert on the subject
12:25 g3kk0 thanks for the input
12:25 g3kk0 this is a 2 node cluster
12:26 g3kk0 so split brain would come into play
12:26 kdhananjay joined #gluster
12:27 kkeithley it should get deleted during the automatic heal.  It might take a little while. Is healing running?
12:28 g3kk0 not sure, how woudl I check
12:28 g3kk0 not sure, how woudl I check?
12:31 kkeithley `gluster volume status $volname` will show if the self-heal daemon is running
12:32 kkeithley more about it at https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/
12:32 glusterbot Title: Managing Volumes - Gluster Docs (at gluster.readthedocs.io)
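kkeithley's check can be spelled out concretely (using the hypothetical volume name "gv1"; these subcommands exist in the 3.7/3.8 CLI):

```shell
# Show brick processes and whether the self-heal daemon is online
gluster volume status gv1

# List entries still pending heal
gluster volume heal gv1 info

# List only entries gluster considers to be in split-brain
gluster volume heal gv1 info split-brain
```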
12:36 ivan_rossi left #gluster
12:37 g3kk0 thank you
12:37 g3kk0 it does appear to be running
12:38 prasanth joined #gluster
12:41 B21956 joined #gluster
12:44 ghollies joined #gluster
12:48 ira joined #gluster
12:54 rwheeler joined #gluster
12:55 squizzi joined #gluster
12:59 unclemarc joined #gluster
13:01 ahino1 joined #gluster
13:05 kovshenin joined #gluster
13:06 rwheeler joined #gluster
13:08 wadeholler joined #gluster
13:12 msvbhat joined #gluster
13:15 hagarth joined #gluster
13:15 mhulsman joined #gluster
13:16 atinm joined #gluster
13:17 Bhaskarakiran joined #gluster
13:19 ju5t joined #gluster
13:21 ju5t hello, we're seeing attempts to heal a volume every 10 minutes but we can't really see what's wrong. Apparently there was a split brain which we've resolved by removing the file (through a GFS mount) but gluster still thinks it's there. Any ideas what we're missing? The heal process is filling up our logs and slowly hurting performance too.
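For a lingering split-brain entry like the one ju5t describes, deleting the file through the mount is often not enough; the CLI can pick a winner per entry. A sketch, with a hypothetical volume name "gv0", brick path, and file path (the `split-brain` resolution subcommands were added in the 3.7 series):

```shell
# See which entries gluster still considers to be in split-brain
gluster volume heal gv0 info split-brain

# Resolve one entry by keeping the larger copy...
gluster volume heal gv0 split-brain bigger-file /path/within/volume

# ...or by keeping the copy on a specific brick
gluster volume heal gv0 split-brain source-brick server1:/bricks/gv0 /path/within/volume
```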
13:29 ahino joined #gluster
13:29 nishanth joined #gluster
13:33 jiffin joined #gluster
13:38 derjohn_mob joined #gluster
13:39 MessedUpHare joined #gluster
13:47 md2k joined #gluster
13:52 wadeholler joined #gluster
13:57 shaunm joined #gluster
14:02 derjohn_mob joined #gluster
14:08 dnunez joined #gluster
14:13 bowhunter joined #gluster
14:14 kshlm joined #gluster
14:22 gem joined #gluster
14:25 prasanth joined #gluster
14:31 JonathanD joined #gluster
14:34 farhorizon joined #gluster
14:34 scubacuda joined #gluster
14:34 farhoriz_ joined #gluster
14:35 JonathanD joined #gluster
14:35 kaushal_ joined #gluster
14:40 scuttle|afk joined #gluster
14:40 scuttlemonkey joined #gluster
14:41 dnunez joined #gluster
14:49 julim joined #gluster
14:50 wadeholler joined #gluster
14:58 wadeholler joined #gluster
15:01 mchangir joined #gluster
15:01 Bhaskarakiran joined #gluster
15:02 nehar joined #gluster
15:06 wushudoin joined #gluster
15:07 lpabon joined #gluster
15:20 wadeholler joined #gluster
15:24 Saravanakmr joined #gluster
15:34 farhorizon joined #gluster
15:36 mchangir joined #gluster
15:42 skylar joined #gluster
15:45 kshlm joined #gluster
15:47 sanoj joined #gluster
15:57 shyam joined #gluster
16:01 ben453 joined #gluster
16:04 farhorizon joined #gluster
16:47 chirino_m joined #gluster
16:50 jiffin1 joined #gluster
16:55 msvbhat joined #gluster
16:56 mhulsman joined #gluster
17:01 arcolife joined #gluster
17:03 ira joined #gluster
17:08 shyam joined #gluster
17:21 karnan joined #gluster
17:26 bowhunter joined #gluster
17:27 kovsheni_ joined #gluster
17:31 kpease joined #gluster
17:33 jbrooks joined #gluster
17:34 shubhendu joined #gluster
17:44 hchiramm joined #gluster
17:47 Manikandan joined #gluster
17:56 msvbhat joined #gluster
18:07 nishanth joined #gluster
18:20 dnunez joined #gluster
18:26 nishanth joined #gluster
18:29 kpease joined #gluster
18:37 ju5t joined #gluster
18:38 Manikandan_ joined #gluster
18:40 kovshenin joined #gluster
18:42 chirino_m joined #gluster
18:48 ashiq joined #gluster
18:51 robb_nl joined #gluster
18:52 derjohn_mob joined #gluster
18:54 mahdi joined #gluster
19:12 ahino joined #gluster
19:18 PaulCuzner joined #gluster
19:24 jwd joined #gluster
19:35 karnan joined #gluster
19:47 hchiramm joined #gluster
19:53 Roland- joined #gluster
19:54 Roland- hi folks, planning to deploy gluster, 2 nodes, 2 drives each node (bricks)
19:54 Roland- 2 copy, is gluster smart enough to put data on the second node not on the same node different drive?
19:56 jesk volume != node
19:57 Roland- ah crap
19:57 Roland- got it
19:57 Roland- I will do a raid something then
19:59 jesk if you have two nodes and you configure replica 2, it will replicate across two nodes
20:00 jesk your raid-1
20:00 kovsheni_ joined #gluster
20:01 Roland- yeah and since I have two drives per node
20:01 Roland- I am gussing raid0 would be on those two drives
20:01 Roland- so I can have 1 brick per node
20:02 julim joined #gluster
20:11 kpease joined #gluster
20:18 jesk maybe some others can recommend a better approach
20:18 post-factum Roland-: that depends on brick order in create command
20:18 jesk not sure, raid 0 means if one drive fails you have to reinitialize the raid 0 again
20:18 post-factum Roland-: you'd want to specify them like this: host1:brick1 host2:brick1 host1:brick2 host2:brick2
20:20 cloph having only two peers and replicate needs special care re quorum/split brain handling.....
20:21 ghollies joined #gluster
20:23 bwerthmann joined #gluster
20:25 Roland- OK I am planning to have two nodes at beginning and then increase on the way
20:25 Roland- is not a good approach ?
20:25 mahdi joined #gluster
20:25 Roland- kind of like raid 1
20:25 Roland- over network that is
20:30 cloph I'd not start with two - if disk-space is a problem, use a third one as arbiter only..
20:31 cloph but with only two you'll likely have splitbrain whenever there's a network hiccup/whenever you need to reboot one of the peers.
20:31 cloph Of course depends on the usage whether you can avoid that or not...
20:31 jesk I guess a problem really is that if you have two nodes and you have a redundant local brick...
20:31 jesk not sure thu
20:31 JesperA- joined #gluster
20:41 shyam joined #gluster
20:43 Roland- this is an m1000e chassis
20:43 Roland- for another node, the 3rd
20:44 Roland- I have to power on a whole server
20:44 Roland- that was my restrain
21:03 gnulnx Anyone know how to compile gluster on freebsd?  It isn't able to locate URCU, though liburcu is installed.  Also tried to specify the path to URCU_LIBS, and that didn't help either
21:08 gnulnx ah, URCU_LIBS doesn't do wha tI thought it did.
21:13 johnmilton joined #gluster
21:13 Roland- even so, best I can do is two nodes with 4 disks each
21:14 Roland- is that a suitable setup ?
21:14 Roland- with a replica 2, will gluster know to put the second replica on the other node?
21:16 Roland- and not on another disk because if the while server with it's 4 disks goes down...
21:16 squizzi joined #gluster
21:19 johnmilton joined #gluster
21:19 jesk valid concern
21:19 gnulnx Roland-: when you say 4 disks each, do you mean individual disks, or a raid?
21:20 JoeJulian @brick order
21:20 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
21:20 gnulnx bingo
21:21 Roland- I would prefer individual disks
21:21 Roland- ah
21:21 gnulnx So you'll need to partition each one individually, and then follow the brick order above
21:21 Roland- bingo indeed
21:22 jesk nice to know
21:22 jesk couldn't find this in the documentation thu
21:23 Roland- so it would be server1:/data/brick1 server2:/data/brick1 server1:/data/brick2 server2:/data/brick2
21:23 Roland- and so on
21:23 Roland- anything to avoid split brain in this complex looking setup ?
21:24 gnulnx I recommend raiding the disks and then using only the two bricks (one on each server)
21:25 gnulnx You have 4x more room for error by using the individual disks
21:25 Roland- I can even do raid10 to be honest
21:26 Roland- software though or zfs based for checksum bla bla
21:26 gnulnx I use zfs raidz
21:26 Roland- exactly
21:26 Roland- so in this case it would be an 2way repliaca between two nodes
21:26 Roland- that is kind of alot of data wasted
21:27 Roland- I am getting the capacity of one node
21:27 Roland- and already have the raid redundancy
21:27 jesk do you use the share on the server itself?
21:27 Roland- I am planning to use it for backup only
21:28 Roland- tbh
21:28 Roland- but I am constrained to two nodes with 4x4tb each
21:28 Roland- performance is not important, integrity is
21:29 Roland- so maybe a raid0 between two disks and then create that replica
21:29 Roland- still would help, 16TB
21:29 Roland- raid10 will mean 8TB for me and then same 8tb with the two way replica
21:31 Roland- which is ... meh
21:32 Roland- in this case I go as far as raid0 with 4 disks
21:32 Roland- I still have the replica, but only 2 disks fault cap
21:33 Roland- a very very messy setup
21:37 hagarth joined #gluster
21:51 farhoriz_ joined #gluster
21:54 JoeJulian Roland-: imho, raid is used with gluster for performance. Replication for redundancy. If you don't need the performance of striped block devices, then use them jbod as multiple bricks. Personally, I don't like to go below replica 3 which gives me 6 nines of reliability.
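JoeJulian's "six nines" figure follows from treating replicas as independent: with each copy available a fraction p of the time, at least one of n copies is up with probability 1 - (1 - p)^n. A back-of-the-envelope check, assuming p = 0.99 per replica:

```shell
# P(at least one of n independent replicas up) = 1 - (1 - p)^n
# With p = 0.99 and replica 3: 1 - 0.01^3 = 0.999999 (six nines)
awk 'BEGIN { p = 0.99; n = 3; printf "%.6f\n", 1 - (1 - p)^n }'
```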
22:08 Roland- I see
22:08 Roland- replica3 kind of kills storage capacity
22:08 Roland- I guess
22:08 JoeJulian It's always a balance.
22:09 JoeJulian Speed, capacity, dollars, reliability...
22:10 Roland- in this case I need capacity
22:10 Roland- but, with a little bit of reliability
22:10 Roland- so distributed replicated
22:10 JoeJulian yep
22:10 JoeJulian Or add dollars. ;)
22:10 Roland- I only have 2 nodes, 4 drives each (4tb)
22:11 Roland- no, rack is full I cannot add another single u
22:11 Roland- at least not now
22:11 Roland- so I need to do something with those two nodes
22:11 JoeJulian I know a good DC with more space. ;)
22:12 Roland- yeah me too, but we got the contract for that
22:12 JoeJulian But yeah, it sounds like you have a plan.
22:12 Roland- too much to push over
22:12 Roland- I am trying to get as much as I can from those two servers
22:12 Roland- and gluster seems mature reliable etc
22:12 Roland- so I can do this over the network, and hope some performance
22:15 Roland- at least I have 10g there
22:15 hchiramm joined #gluster
22:15 Roland- but for these spinners
22:15 Roland- 5.4k rpm :))
22:20 farhorizon joined #gluster
22:23 farhoriz_ joined #gluster
22:43 hchiramm joined #gluster
22:52 shyam joined #gluster
23:43 hchiramm joined #gluster
