
IRC log for #gluster, 2016-07-11


All times shown according to UTC.

Time Nick Message
00:10 B21956 joined #gluster
00:52 shdeng joined #gluster
01:08 armyriad joined #gluster
01:42 daMaestro joined #gluster
01:46 JesperA joined #gluster
01:59 nishanth joined #gluster
02:15 PaulCuzner joined #gluster
02:15 harish joined #gluster
02:36 Javezim Did I hear that 3.8 was coming out soon?
02:43 armyriad joined #gluster
02:44 kdhananjay joined #gluster
02:50 hagarth1 Javezim: already out
02:51 rafi joined #gluster
02:51 d4n13L for almost a month, yeah ;)
02:53 Javezim Ah, I'll have to look into it. Do you know if they made any improvements for finding and fixing file split-brains? Seems to be my biggest PITA at the moment.
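For reference, the 3.7/3.8 CLI already has helpers for exactly this; a minimal sketch, assuming a volume named gv0, a placeholder file path (given relative to the volume root), and a placeholder brick:

  gluster volume heal gv0 info split-brain
  # resolve a specific file by keeping the larger copy, or by naming the winning brick:
  gluster volume heal gv0 split-brain bigger-file /dir/file-in-splitbrain
  gluster volume heal gv0 split-brain source-brick server1:/bricks/gv0 /dir/file-in-splitbrain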
02:58 shdeng joined #gluster
03:20 magrawal joined #gluster
03:22 julim joined #gluster
03:39 nbalacha joined #gluster
03:49 itisravi joined #gluster
03:50 RameshN joined #gluster
03:59 auzty joined #gluster
04:15 atinm joined #gluster
04:21 masuberu joined #gluster
04:23 atinmu joined #gluster
04:25 MusiciAtin joined #gluster
04:29 poornimag joined #gluster
04:30 sanoj joined #gluster
04:34 hgowtham joined #gluster
04:39 ppai joined #gluster
04:40 shubhendu joined #gluster
04:46 ramky joined #gluster
05:02 prasanth joined #gluster
05:02 Manikandan joined #gluster
05:06 ndarshan joined #gluster
05:11 nehar joined #gluster
05:14 shubhendu joined #gluster
05:15 sakshi joined #gluster
05:20 Bhaskarakiran joined #gluster
05:30 jiffin joined #gluster
05:36 gowtham joined #gluster
05:41 Muthu joined #gluster
05:42 nishanth joined #gluster
05:42 Saravanakmr joined #gluster
05:52 hchiramm joined #gluster
05:57 karnan joined #gluster
05:57 devyani7_ joined #gluster
05:57 siel joined #gluster
06:02 misak joined #gluster
06:02 misak Hi there
06:04 prasanth joined #gluster
06:04 nehar joined #gluster
06:05 karthik___ joined #gluster
06:06 misak I'm facing a problem with GlusterFS and need help. I'm trying to create an HA cluster with GlusterFS on oVirt, following this guide: https://www.ovirt.org/blog/2016/03/up-and-running-with-ovirt-3-6/
06:06 misak on clean CentOS 7 install, but getting this error:
06:07 misak [2016-07-09 12:29:17.218908] E [MSGID: 106062] [glusterd-volume-ops.c:2328:glusterd_op_create_volume] 0-management: brick2.mount_dir not present
06:07 misak [2016-07-09 12:29:17.218986] E [MSGID: 106123] [glusterd-syncop.c:1411:gd_commit_op_phase] 0-management: Commit of operation 'Volume Create' failed on localhost
06:07 misak any help appreciated
06:07 Manikandan joined #gluster
06:09 siel joined #gluster
06:17 kshlm joined #gluster
06:19 aspandey joined #gluster
06:19 R0ok_ joined #gluster
06:21 jiffin misak: i am not sure whether you are facing a similar problem https://botbot.me/freenode/gluster/2016-06-20/
06:22 aravindavk joined #gluster
06:23 Bhaskarakiran joined #gluster
06:24 Pupeno joined #gluster
06:24 Pupeno joined #gluster
06:27 foster joined #gluster
06:28 jwd joined #gluster
06:29 satya4ever joined #gluster
06:33 jtux joined #gluster
06:35 rafi1 joined #gluster
06:36 rafi joined #gluster
06:37 misak jiffin: Thanks, will try now
06:38 nehar joined #gluster
06:39 karnan_ joined #gluster
06:45 skoduri joined #gluster
06:45 itisravi joined #gluster
06:46 rafi1 joined #gluster
06:51 RameshN joined #gluster
06:51 ppai joined #gluster
06:53 hchiramm_ joined #gluster
06:58 anil joined #gluster
06:58 msvbhat joined #gluster
06:58 Manikandan joined #gluster
06:58 Pupeno joined #gluster
07:09 jri joined #gluster
07:15 hgowtham_ joined #gluster
07:20 hackman joined #gluster
07:20 Pupeno joined #gluster
07:21 mbukatov joined #gluster
07:25 aravindavk joined #gluster
07:29 ivan_rossi joined #gluster
07:33 rafi joined #gluster
07:36 aspandey joined #gluster
07:37 mbukatov joined #gluster
07:37 fsimonce joined #gluster
07:40 aravindavk joined #gluster
07:44 arcolife joined #gluster
07:46 armyriad joined #gluster
07:47 GreatSnoopy joined #gluster
07:52 level7_ joined #gluster
07:52 Manikandan joined #gluster
07:54 hchiramm__ joined #gluster
07:55 MikeLupe joined #gluster
07:55 RameshN joined #gluster
07:57 deniszh joined #gluster
07:58 Bhaskarakiran joined #gluster
08:00 skoduri_ joined #gluster
08:02 Ulrar Is there a guide for upgrading glusterfs to another minor version ? I'd like to upgrade my 3.7.12 to 3.7.13
08:02 Ulrar Always reinstalled before, never upgraded yet
08:04 Acinonyx joined #gluster
08:04 wnlx joined #gluster
08:05 post-factum Ulrar: upgrade server 1, restart gluster, wait for heal, upgrade server 2, restart gluster, wait for heal... then upgrade clients and remount
08:06 post-factum Ulrar: just consider pkilling (SIGTERM) glusterfsd processes on each server as well before restarting glusterd
08:06 Ulrar The clients are the same as the server
08:07 Ulrar I need to upgrade to opcode or something too no ?
08:07 post-factum Ulrar: after successful upgrade of all servers and clients one may bump opversion
08:07 Ulrar Yeah opversion, that's the one
08:07 Ulrar Allright then thanks, I'll try that
08:07 Ulrar I don't need to power down the VMs then ?
08:14 itisravi_ joined #gluster
08:17 Ulrar Bah, I'll just reboot the servers, that way I'll be sure
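A minimal sketch of that rolling upgrade, assuming CentOS-style packaging, a volume named gv0, and that the target release keeps op-version 30712 (all placeholders, adjust to your distro and release):

  # on each server, one at a time:
  yum update glusterfs-server          # or your distro's package tool
  pkill glusterfsd                     # SIGTERM the brick processes, as suggested above
  systemctl restart glusterd
  gluster volume heal gv0 info         # wait until no entries remain before the next server
  # once every server and client runs the new version, bump the op-version once:
  gluster volume set all cluster.op-version 30712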
08:21 Seth_Karlo joined #gluster
08:21 Pupeno joined #gluster
08:24 Pupeno joined #gluster
08:29 hybrid512 joined #gluster
08:31 rwheeler joined #gluster
08:33 itisravi_ joined #gluster
08:34 aspandey joined #gluster
08:38 Klas planning on building our own packages for gluster (client and server), we are currently considering 3.7.11 since that seems to be the most stable version atm, does that seem right?
08:44 atalur joined #gluster
08:45 post-factum Klas: our experience says one should consider additional patching to fix some annoying bugs as not everything is merged yet
08:46 Klas oh
08:46 Ulrar Klas: If you use sharding, 3.7.11 really isn't the most stable version
08:46 Ulrar But if you don't sure
08:47 Klas I'm open to any version really, I just want something newer than the one in xenial since that one has several annoying interface bugs, basically
08:48 Klas (notably, seeing which one is the arbiter is annoying)
08:48 Ulrar 3.7.12 works fine if you don't need libgfapi, and 3.7.13 seems to just correct that so should be the best one
08:48 Ulrar i'm installing it now to test
08:48 Klas yeah, we are still in "lab" level
08:49 Ulrar I've got 3.7.12 in production using NFS, works a lot better than 3.7.11 for us
08:49 hgowtham__ joined #gluster
08:49 Klas ok
08:49 Bhaskarakiran joined #gluster
08:49 Ulrar But that's just because we use sharding
08:50 Ulrar post-factum: Do you know what the opversion of 3.7.13 is, or where I can find it ?
08:50 Klas I'm trying to understand what sharding is currently =)
08:51 post-factum Ulrar: i believe it wasn't updated and is still 30712
08:51 rastar joined #gluster
08:51 Ulrar Klas: Basically it splits big files into smaller ones; that way, when you need to heal a volume it takes a lot less time since only the shards that were modified need healing. And only those get locked, so for VM storage that means a lot less IOwait
08:51 Ulrar post-factum: Ha ! thanks
08:52 Ulrar Guess it didn't need to change if it's just a fix to the lib
08:52 Klas Ulrar: ah, won't be necessary for us, we will only store relatively small files
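Sharding, as described above, is a per-volume option; a minimal sketch of enabling it, with a placeholder volume name and shard size (it only applies to files created after the option is set):

  gluster volume set gv0 features.shard on
  gluster volume set gv0 features.shard-block-size 64MB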
08:53 post-factum Ulrar: you could always check libglusterfs/src/globals.h for latest opversion available
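Two hedged ways to look that value up (macro names can differ between releases, so treat the grep pattern as an assumption):

  grep -i op_version libglusterfs/src/globals.h | head      # in a source checkout
  grep operating-version /var/lib/glusterd/glusterd.info    # value currently active on a server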
08:54 hchiramm_ joined #gluster
08:58 Klas Ulrar: btw, why NFS?
08:58 Klas we are only testing the FUSE client since we want HA
09:00 Ulrar Klas: Because with 3.7.12 the libgfapi doesn't work, that's why they just released 3.7.13 so quickly
09:01 Ulrar It's easier to add nfs in proxmox than the fuse client, so I went with that. And since my gluster are installed on the proxmox, I just use localhost as the server anyway
09:02 Ulrar So no real concern about HA, if localhost isn't responding the server is having bigger problem than accessing the gluster anyway :)
09:06 Slashman joined #gluster
09:08 Manikandan joined #gluster
09:09 derjohn_mobi joined #gluster
09:09 derjohn_mobi Hello, currently I find files with such permissions in my gluster 3.7: "d????????? ? ?        ?           ?            ? .ssh". What could be the cause of this?
09:10 derjohn_mobi Those files are probably opened on multiple servers, mounted via fuse.gluster
09:10 nbalacha derjohn_mobi, looks like a lookup failure. Do you see any errors in the client logs ?
09:12 derjohn_mobi nbalacha, yes, I see many "I" entries and one "E"
09:12 nbalacha derjohn_mobi, what does the "E" say?
09:12 derjohn_mobi "E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk] 0-filetrans-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
09:12 derjohn_mobi "
09:12 derjohn_mobi and E [socket.c:2395:socket_connect_finish] 0-filetrans-client-0: connection to 10.31.39.51:24007 failed (Connection timed out)
09:13 derjohn_mobi It's a two node cluster with replicated volume. (No quorum tie breaker ....?)
09:14 derjohn_mobi But anyway, I would have expected that the filesystem stays "in shape" after a connection loss
09:15 nbalacha derjohn_mobi, are all files affected?
09:16 nbalacha derjohn_mobi, what is your vol config?
09:18 derjohn_mobi nbalacha, No, only some files are affected. Especially the .ssh dir, which is probably always open on two different machines
09:19 derjohn_mobi Type: Replicate, Number of Bricks: 1 x 2 = 2
09:19 derjohn_mobi Options Reconfigured:
09:19 derjohn_mobi performance.readdir-ahead: o
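The two "E" lines above point at a brick whose port can't be resolved from one of the nodes; a few checks worth running (the volume name "filetrans" is only inferred from the 0-filetrans-client-0 log prefix):

  gluster peer status                   # both nodes should show State: Peer in Cluster (Connected)
  gluster volume status filetrans       # brick processes should be Online with a port assigned
  gluster volume heal filetrans info    # entries still pending heal after the reconnect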
09:27 kotreshhr joined #gluster
09:29 nehar joined #gluster
09:32 msvbhat joined #gluster
09:38 Manikandan joined #gluster
09:39 Jules-2 joined #gluster
09:45 nbalacha joined #gluster
09:46 atinm joined #gluster
09:50 nishanth joined #gluster
09:53 karthik___ joined #gluster
09:54 hchiramm__ joined #gluster
09:59 karthik_ joined #gluster
10:04 RameshN joined #gluster
10:10 derjohn_mob joined #gluster
10:11 level7 joined #gluster
10:19 harish_ joined #gluster
10:24 kdhananjay1 joined #gluster
10:24 aspandey joined #gluster
10:27 hchiramm__ joined #gluster
10:30 Acinonyx joined #gluster
10:32 Klas Ulrar: ah, proxmox, yeah, that changes things
10:32 aspandey joined #gluster
10:33 kdhananjay joined #gluster
10:36 Manikandan joined #gluster
10:41 jwd joined #gluster
10:43 level7_ joined #gluster
10:46 Saravanakmr joined #gluster
10:49 johnmilton joined #gluster
10:50 level7 joined #gluster
10:52 Acinonyx joined #gluster
10:54 nishanth joined #gluster
10:58 itisravi joined #gluster
10:59 Manikandan joined #gluster
11:05 sbulage joined #gluster
11:06 atinm joined #gluster
11:11 ashiq joined #gluster
11:20 rastar joined #gluster
11:21 ivan_rossi left #gluster
11:22 level7 joined #gluster
11:23 Manikandan joined #gluster
11:24 B21956 joined #gluster
11:25 Manikandan joined #gluster
11:27 hchiramm_ joined #gluster
11:37 rafaels joined #gluster
11:43 hgowtham_ joined #gluster
11:47 kotreshhr left #gluster
11:48 ppai joined #gluster
11:48 Muthu joined #gluster
11:49 kotreshhr joined #gluster
11:49 Manikandan joined #gluster
11:53 ira joined #gluster
12:00 titansmc joined #gluster
12:03 titansmc hi guys, I just upgraded my SLES 12 SP0 to SLES 12 SP1, along with glusterfs, from 3.7 to 3.8. First I upgraded the OS and then I modified the repo to point to the new version and upgraded it. Is that ok? Because when I try to run any command I only get:
12:03 titansmc server:~ # gluster peer status
12:03 titansmc gluster: symbol lookup error: gluster: undefined symbol: use_spinlocks
12:11 titansmc sorry guys, I found what it was....I had upgraded glusterfs but not its dependencies
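A hedged way to spot that kind of mismatch on SLES (package names vary between releases, so the grep is deliberately broad):

  rpm -qa | grep -i -e gluster -e gfapi    # all gluster-related packages should report the same version
  zypper refresh && zypper update          # pull the matching library packages from the new repo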
12:14 harish_ joined #gluster
12:25 fredk- joined #gluster
12:27 robb_nl joined #gluster
12:27 ashka hi, I plan to upgrade from 3.5 to 3.7 because of a blocking bug, should I upgrade all my brick nodes to 3.6 before 3.7 or is it unnecessary?
12:28 hchiramm__ joined #gluster
12:29 wadeholler joined #gluster
12:34 post-factum ashka: i'd to to latest 3.5, then to latest 3.6, and then to latest 3.7
12:34 post-factum *i'd go
12:34 ashka post-factum: okay, thanks
12:35 fredk- Hi Guys, we've been doing some experimentation using gluster for small file workloads (web content data). And, we've been completely unsuccessful in reaching decent performance for small file writes. Very briefly we've run 3 x nodes with a replica of 3. The 3 nodes each have 8 x 960GB Samsung PM863's (each set up as one brick straight from sata port), 64 gigs of ram and dual e5-2630's. In terms of write performance we've seen upwards to 300
12:35 fredk- file writes per second, which seems very low considering. Is this what we could expect or is there something horribly wrong here? :)
12:37 fredk- The hardware should be capable of I guess upwards to 100K writes in theory, ofcourse that'll never happen. But it'd be nice not to see 300 :)
12:39 paraenggu joined #gluster
12:39 fredk- I forgot to mention, it's a 10 gig network and using fuse client.
12:40 paraenggu Hi there, does anyone know what the current state of user/group quota support within GlusterFS is?
12:40 plarsen joined #gluster
12:41 post-factum fredk-: do you really expect to write that much?
12:42 fredk- post-factum: Good question, I do expect to write perhaps upwards to 10k file writes burst.
12:42 fredk- primary usecase would be hosting data and mail directories
12:43 jiffin paraenggu: Rightnow it is targeted for 3.9 I guess, https://www.gluster.org/community/roadmap/3.9/
12:43 glusterbot Title: GlusterFS 3.9 Planning Gluster (at www.gluster.org)
12:43 post-factum fredk-: probably, the first step would be using replica 3 arbiter 1 instead of full replica
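"replica 3 arbiter 1" keeps the third copy as metadata-only, so each write carries roughly two data copies instead of three; a minimal creation sketch with placeholder hosts and brick paths:

  gluster volume create gv0 replica 3 arbiter 1 \
      node1:/bricks/gv0 node2:/bricks/gv0 node3:/bricks/gv0-arbiter
  gluster volume start gv0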
12:43 jiffin paraenggu: you can check with Manikandan about the current status of that feature
12:44 paraenggu jiffin: Thanks!
12:44 jiffin he is not online rightnow
12:44 post-factum fredk-: but i doubt that high iopses could be achieved with gluster fuse
12:44 paraenggu jiffin: Ok, good to know
12:45 fredk- @post-factum it could be, but as this is metadata intensive i'm not 100% sure it'll do enough.
12:45 fredk- @post-factum would using nfs give us better numbers?
12:46 fredk- ofcourse we will test this, however I was a little put off by the somewhat disappointing numbers :)
12:46 post-factum fredk-: yes, you should try to use nfs-ganesha (with, probably, some HA setup) first
12:47 fredk- @post-factum that said read numbers were "amazing" as in, for anything but the smallest workloads we maxed our network which is really about what you can expect :)
12:47 post-factum fredk-: it is all about network latency
12:48 fredk- sure, ofcourse
12:48 fredk- @post-factum Do you feel my numbers sounds like they are possible to reach using  reachable
12:48 fredk- hold that thought finishing that sentence :)
12:49 post-factum anyway, i doubt
12:49 fredk- @post-factum *possible to reach using nfs I meant to say.
12:49 post-factum ah
12:49 post-factum :)
12:49 post-factum no :D
12:49 post-factum test it first
12:49 fredk- ofcourse
12:50 fredk- I guess i'm just poking around to get a sense of if its worth the time at all (considering the usecase)
12:51 post-factum fredk-: if you disable nfs strict locking and enable attribute caching, losing some consistency across multiple clients, then something could be achieved
12:51 TvL2386 joined #gluster
12:51 post-factum fredk-: but you should get some numbers for your setup first to evaluate it
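On the client side, attribute caching and relaxed locking are largely NFS mount options; a hedged example (actimeo and nolock trade consistency across clients for fewer metadata round trips, exactly the caveat mentioned above):

  mount -t nfs -o vers=3,nolock,noatime,actimeo=30 server1:/gv0 /mnt/web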
12:52 fredk- mm, there is that, however this is a web cluster and things do need to be consistent
12:52 fredk- @post-factum yeah, it could also be that this just aint the right usecase for gluster (yet)
12:53 fredk- @post-factum are you aware of others using it for this kind of workload, atleast semi successfully? :)
12:53 post-factum well
12:53 post-factum we use it for web, but that is read-most workload only
12:53 fredk- webhosting data?
12:54 post-factum also we tried to use the fuse client for mail storage, but are now stuck with nfs because of memory consumption
12:54 post-factum correct, web-hosting data
12:54 fredk- @post-factum thats exactly what we are going to be using it for as well, does it perform acceptably for you?
12:55 post-factum more or less taking into account that i have to deal with gluster devs almost every workday to figure out what went wrong this time :)
12:55 post-factum but for real we do not have another option. and nobody has
12:56 post-factum except of cephfs in some distant future
12:56 post-factum (everything i talk about is related to posix storage)
12:56 fredk- hehe, cephfs is a bit like "we are fairly sure-ish that it won't eat your data"
12:57 fredk- and yes, posix storage ofcourse.
12:57 post-factum if you have a possibility to integrate frontend with some S3-like backend, do that and stick to openstack swift
12:57 fredk- I don't, it's traditional webhosting data
12:57 post-factum if you need posix, you have not so many options
12:57 fredk- @post-factum there is Compuverde, swedish company
12:58 fredk- pure is coming out with an all flash posix solution, I guess you have netapp
12:58 fredk- but these two (pure/netapp) will cost you a metric bucketload
12:59 post-factum we have mixed ceph/gluster stack here, all opensource
13:00 Gnomethrower joined #gluster
13:01 fredk- @post-factum I don't mind paying as such, but open source has come a long way and some of the solutions out there are literally as good as anything available, even commercially.
13:01 fredk- but okay, I guess we'll poke around some more
13:02 fredk- thank you
13:07 chirino joined #gluster
13:08 ira joined #gluster
13:13 julim joined #gluster
13:18 atinm joined #gluster
13:21 atinmu joined #gluster
13:28 hchiramm_ joined #gluster
13:36 post-factum np
13:41 Gnomethrower joined #gluster
13:41 wadeholler joined #gluster
13:49 shyam joined #gluster
13:53 nbalacha joined #gluster
14:04 MikeLupe joined #gluster
14:05 Jules-2 what is the best way to upgrade glusterfs within 3.7 release without a downtime?
14:08 arcolife joined #gluster
14:14 bowhunter joined #gluster
14:20 dnunez joined #gluster
14:24 skylar joined #gluster
14:25 jiffin joined #gluster
14:29 hchiramm__ joined #gluster
14:29 jesk joined #gluster
14:29 jesk hi
14:29 glusterbot jesk: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:30 jesk I know the internet is full of answers regarding "/data/path is already part of a volume"
14:30 jesk but how can this affect me who has never used gluster before?
14:31 ndevos jesk: I think you can run into it when you execute the "gluster volume create" multiple times, in case a previous one failed
14:33 jesk oh ok, then this is the reason
14:33 jesk maybe someone should fix this then (leaving dirt behind without being successful is considered bad)
14:34 jesk thx dood
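The usual cleanup for a brick directory left behind by a failed create is to strip gluster's markers from it (hedged: /data/path stands in for the brick path, and this discards gluster's bookkeeping on that directory, so only run it on a brick that never held data you need):

  setfattr -x trusted.glusterfs.volume-id /data/path
  setfattr -x trusted.gfid /data/path
  rm -rf /data/path/.glusterfs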
14:37 squizzi joined #gluster
14:40 aravindavk joined #gluster
14:48 nehar joined #gluster
14:52 robb_nl joined #gluster
14:52 jesk gluster gives such an elegant impression, incredible
14:52 jesk I tried gfs2, pain, pure pain, especially with the pacemaker bloat
14:53 jesk i tried ceph, seems awesome but such a complicated beat
14:53 jesk beast*
14:53 jesk gluster instead looks rly lightweight and much easier
14:53 post-factum ndevos: another naїve one here ^^ :)
14:54 jesk haha
14:55 jesk ah yeah and I tried drbd, ran that for 6 years
14:55 post-factum one cannot compare completely different solutions
14:55 jesk 6 month back the lvm from the primary node totally fucked up, drbd just switched to the secondary node serving primary node from there
14:56 jesk also rock solid
14:56 jesk true
14:56 rafi joined #gluster
14:56 jbrooks joined #gluster
14:57 jesk but most of those solutions offer a way to solve same or similar  problems
14:57 jesk just with different architectures you need to build around
14:58 jesk lets see how a split brain will be handled
14:59 arcolife joined #gluster
15:01 level7 joined #gluster
15:06 wushudoin joined #gluster
15:07 level7_ joined #gluster
15:11 shyam joined #gluster
15:12 robb_nl joined #gluster
15:14 owlbot joined #gluster
15:15 jwd joined #gluster
15:16 guhcampos joined #gluster
15:18 jesk oh wow, there is a can full of daemons running
15:18 jesk muha
15:19 jwd joined #gluster
15:25 satya4ever joined #gluster
15:26 JesperA- joined #gluster
15:37 kpease joined #gluster
15:38 ashiq joined #gluster
15:41 muneerse joined #gluster
15:49 muneerse2 joined #gluster
15:49 level7 joined #gluster
15:55 level7_ joined #gluster
16:05 rafi joined #gluster
16:08 shubhendu joined #gluster
16:12 devyani7_ joined #gluster
16:25 F2Knight joined #gluster
16:27 JesperA joined #gluster
16:28 paraenggu joined #gluster
16:30 guhcampos joined #gluster
16:30 jiffin joined #gluster
16:49 devyani7 joined #gluster
17:23 ira joined #gluster
17:25 karnan joined #gluster
17:27 paraenggu left #gluster
17:29 muneerse2 joined #gluster
17:31 jri joined #gluster
17:32 skoduri joined #gluster
17:54 F2Knight_ joined #gluster
17:54 derjohn_mob joined #gluster
17:57 bowhunter joined #gluster
18:03 ahino joined #gluster
18:23 dnunez joined #gluster
18:36 hackman joined #gluster
18:51 BitByteNybble110 joined #gluster
19:10 ira joined #gluster
19:11 deniszh joined #gluster
19:25 robb_nl joined #gluster
19:25 Wizek joined #gluster
19:58 pocketprotector joined #gluster
20:09 squizzi joined #gluster
20:24 guhcampos joined #gluster
20:24 sadbox joined #gluster
20:31 sadbox joined #gluster
20:32 julim joined #gluster
20:35 Telsin joined #gluster
20:40 deniszh joined #gluster
20:49 pampan joined #gluster
21:01 robb_nl joined #gluster
21:22 Shirwah joined #gluster
21:53 jbrooks joined #gluster
22:42 B21956 joined #gluster
22:55 wnlx joined #gluster
23:05 johnmilton joined #gluster
23:15 ira joined #gluster
23:17 wnlx joined #gluster
23:21 johnmilton joined #gluster
23:26 Telsin joined #gluster
23:52 dnunez joined #gluster
23:54 julim joined #gluster
