
IRC log for #gluster, 2017-11-07


All times shown according to UTC.

Time Nick Message
00:08 marlinc joined #gluster
00:31 marin[m] joined #gluster
00:31 smohan[m] joined #gluster
00:38 psony joined #gluster
00:41 map1541 joined #gluster
01:10 shyam joined #gluster
01:24 bluenemo joined #gluster
01:26 int_0x21 joined #gluster
02:04 prasanth joined #gluster
02:05 hmamtora joined #gluster
02:19 cliluw joined #gluster
02:25 baber joined #gluster
02:31 cliluw joined #gluster
02:42 VanDuong joined #gluster
02:54 cliluw joined #gluster
02:56 ilbot3 joined #gluster
02:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:14 blu_ joined #gluster
03:23 dlambrig joined #gluster
03:44 nbalacha joined #gluster
03:44 kramdoss__ joined #gluster
04:02 Saravanakmr joined #gluster
04:11 jap joined #gluster
04:12 itisravi joined #gluster
04:22 atinm joined #gluster
04:28 Shu6h3ndu joined #gluster
04:29 nishanth joined #gluster
04:32 nbalacha joined #gluster
04:34 amye joined #gluster
04:37 gyadav joined #gluster
04:38 side_control joined #gluster
04:50 sanoj joined #gluster
04:50 Saravanakmr joined #gluster
04:51 Shu6h3ndu joined #gluster
04:52 joek_ joined #gluster
04:53 skumar joined #gluster
04:55 Saravanakmr joined #gluster
04:56 poornima_ joined #gluster
05:12 nbalacha joined #gluster
05:19 msvbhat joined #gluster
05:22 apandey joined #gluster
05:25 sunny joined #gluster
05:26 karthik_us joined #gluster
05:27 hgowtham joined #gluster
05:27 ndarshan joined #gluster
05:34 susant joined #gluster
05:39 om2 joined #gluster
05:52 skoduri joined #gluster
05:57 ppai joined #gluster
06:27 xavih joined #gluster
06:39 mbukatov joined #gluster
06:41 xavih joined #gluster
06:46 Prasad joined #gluster
06:48 skoduri_ joined #gluster
06:48 Shu6h3ndu_ joined #gluster
06:56 Prasad joined #gluster
07:04 kotreshhr joined #gluster
07:13 jtux joined #gluster
07:17 jtux joined #gluster
07:22 psony joined #gluster
07:25 int_0x21 Hi, does anyone know anything about this? [rdma.c:1289:gf_rdma_cm_event_handler] 0-hahs-client-1: cma event RDMA_CM_EVENT_REJECTED, error 8 (me:10.0.30.15:49135 peer:10.0.30.15:24008)
07:31 Prasad joined #gluster
07:32 rafi1 joined #gluster
07:33 Prasad_ joined #gluster
07:36 Prasad__ joined #gluster
07:38 susant joined #gluster
07:39 fsimonce joined #gluster
07:52 om2 joined #gluster
08:02 ivan_rossi joined #gluster
08:05 kdhananjay joined #gluster
08:13 msvbhat joined #gluster
08:16 [diablo] joined #gluster
08:17 major joined #gluster
08:25 rwheeler joined #gluster
08:28 psony joined #gluster
08:32 Prasad joined #gluster
08:40 skoduri_ joined #gluster
08:40 skumar_ joined #gluster
08:51 ThHirsch joined #gluster
08:57 buvanesh_kumar joined #gluster
09:07 _KaszpiR_ joined #gluster
09:09 ahino joined #gluster
09:40 itisravi joined #gluster
09:41 skumar_ joined #gluster
09:47 skumar__ joined #gluster
09:58 sanoj joined #gluster
10:00 [diablo] joined #gluster
10:07 Wizek_ joined #gluster
10:09 msvbhat joined #gluster
10:22 FinalX joined #gluster
10:23 FinalX hi - with geo replication, how can I have multiple slaves for the same master?
10:25 FinalX I'm running 3.12, and when I'm trying to add a second slave, no matter which one, it tells me there's already a session between the master and the first slave I added with geo-replication create.
10:25 msvbhat joined #gluster
10:34 skumar_ joined #gluster
10:52 kramdoss__ joined #gluster
10:54 aravindavk joined #gluster
11:07 ivan_rossi left #gluster
11:18 kotreshhr left #gluster
11:22 ThHirsch joined #gluster
11:29 rastar joined #gluster
11:42 msvbhat joined #gluster
11:48 FinalX seems that, contrary to what the images and doc text say, there's no way of having multiple slave nodes in a geo-replicated gluster setup?
11:49 Klas sounds plausible (don't trust the docs in general, they are hardly ever up to date)
11:56 _KaszpiR_ joined #gluster
11:59 skoduri_ joined #gluster
12:00 msvbhat joined #gluster
12:10 shyam joined #gluster
12:11 skumar joined #gluster
12:23 ThHirsch joined #gluster
12:41 atinm joined #gluster
12:45 rwheeler joined #gluster
12:49 map1541 joined #gluster
12:58 nbalacha joined #gluster
12:59 ahino1 joined #gluster
13:03 _KaszpiR_ joined #gluster
13:06 prasanth joined #gluster
13:12 phlogistonjohn joined #gluster
13:20 dlambrig joined #gluster
13:21 skoduri_ joined #gluster
13:27 plarsen joined #gluster
13:30 giany joined #gluster
13:30 giany left #gluster
13:32 MrAbaddon joined #gluster
13:32 nbalacha joined #gluster
13:35 kettlewell joined #gluster
13:37 kettlewell Anyone have any information on compatibility between gluster 3.8 client side, and 3.5.3 on the server side ?
13:39 kettlewell or just in general for compatibility with the clients being ahead of the server versions...
13:47 Prasad joined #gluster
13:47 aronnax joined #gluster
13:48 ivan_rossi joined #gluster
13:52 skumar_ joined #gluster
14:11 ahino joined #gluster
14:24 stanza joined #gluster
14:25 stanza hi all guys.. I'm trying to compile gluster without RDMA support, but I'm miserably failing at creating the RPMs. Which is the right option on the configure line?
14:25 susant joined #gluster
14:35 phlogistonjohn joined #gluster
14:38 aravindavk joined #gluster
14:46 hmamtora joined #gluster
14:47 ndevos stanza: you can pass --without-rdma to rpmbuild and it should not build the sub-package, use 'make -C extras/LinuxRPM srcrpm' to build the src.rpm
14:49 ndevos stanza: or, pass "--without=rdma" to "mock" for even cleaner builds ;-)
14:51 stanza ndevos: so I should modify the Makefile.in? passing the value in the ./configure command does not help... How can I pass that parameter to 'mock'?
14:53 gyadav joined #gluster
14:54 stanza ndevos: I am trying to install it on an AWS EC2 Linux AMI
14:55 ndevos stanza: no, there should not be any need to modify anything
14:55 ndevos stanza: can't you just use packages we provide? either from a repository on download.gluster.org or by enabling the repository from the CentOS Storage SIG?
14:57 ndevos stanza: to build, you just run "./configure && make -C extras/LinuxRPM srcrpm" that will get you the src.rpm and then you can "rpmbuild --rebuild --without-rdma ...src.rpm"
14:57 ndevos but, really, use the prebuilt RPMs if you can
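
A rough sketch of the build path ndevos describes, assuming a glusterfs source tree (the src.rpm name is illustrative, and the autogen step applies only to a git checkout):

    ./autogen.sh                              # git checkout only; release tarballs ship configure
    ./configure
    make -C extras/LinuxRPM srcrpm            # drops glusterfs-*.src.rpm under extras/LinuxRPM/
    rpmbuild --rebuild --without rdma extras/LinuxRPM/glusterfs-*.src.rpm
    # or, for a cleaner chroot build, as suggested:
    mock --rebuild --without=rdma extras/LinuxRPM/glusterfs-*.src.rpm
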
14:57 stanza ndevos: I have tried, but I am not confident with yum repos. I landed here https://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo, I've seen that the repos have moved but cannot find how to add them. Can you send me the link to NAME.repo so I just need to import it?
14:59 sunny joined #gluster
14:59 shyam joined #gluster
15:00 susant joined #gluster
15:01 ndevos stanza: you will need to enable CentOS Extras and then you can install the centos-release-gluster312 package that enables the yum repository, then you can install the glusterfs-server RPM through yum
15:04 stanza ndevos: thanks a lot!
15:05 ndevos stanza: if you have the steps to enable the repository, you may want to add that to one of the documents on http://docs.gluster.org/
15:05 glusterbot Title: Gluster Docs (at docs.gluster.org)
15:07 stanza ndevos: for sure, I'm trying to figure it out, once done I will share. BTW, I agree that installing from pre-built RPMs is the right choice.. thanks
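
The repository route ndevos suggests would look roughly like this on CentOS 7 (a sketch; the release package comes from CentOS Extras, and the Amazon Linux AMI stanza mentions does not ship that repo, so the steps there may differ):

    yum install -y centos-release-gluster312   # enables the CentOS Storage SIG repo for 3.12
    yum install -y glusterfs-server
    systemctl enable --now glusterd
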
15:10 phlogistonjohn joined #gluster
15:10 skylar1 joined #gluster
15:13 arpu joined #gluster
15:21 ThHirsch joined #gluster
15:33 farhorizon joined #gluster
15:35 ivan_rossi left #gluster
15:40 dimitris joined #gluster
15:43 dimitris hi, we're having a problem with our geo-replicated cluster
15:43 dimitris files are created on the slave with 0 bytes
15:43 dimitris while the master side looks fine
15:44 dimitris could anyone provide a hint for troubleshooting?
15:47 dimitris not all files have this problem, so geo-replication is working, at least partially
15:47 om2 joined #gluster
15:48 aravindavk joined #gluster
15:51 gyadav_ joined #gluster
15:54 FinalX is there really no way to have multiple slaves for a geo replication master? :( it seemed like such a great idea to set it up
15:55 rafi joined #gluster
15:57 Klas just a pure guess
15:57 Klas geroeplica produces quite a lot of load
15:57 Klas doing it towards multiple servers seems very costly
15:58 Klas I assume you could make a chain of georeplicas though
15:58 Klas Master->Slave0->Slave1->Slave2
15:58 Klas seems kinda silly though
15:59 Klas what is your use-case?
15:59 FinalX I guess, but it's not what the docs show in images, and the docs also mention plural "slave nodes". The load on the master wouldn't be a problem at all though.
15:59 Klas the slave nodes are generally a gluster cluster
15:59 Klas you define the volume as normal on the slave, and then just push things to it
16:00 FinalX I have my physical, quite powerful server in Amsterdam. I have 4 VPS's in New York, Las Vegas, Luxembourg and Singapore. I have MariaDB w/ Galera Cluster set up, and need now some form of storage to replicate to all nodes so I can keep wordpress installs etc on there (with local reads).
16:01 Klas why not just use transaction log shipping?
16:01 FinalX I gave up on normal gluster replication because of the latency for synchronous writes (even with write cache), 4 minutes for serially writing 30 files (unzipping wordpress' latest.zip) is not gonna work
16:01 FinalX Klas: I'm not familiar with that
16:02 Klas https://mariadb.com/kb/en/library/binary-log/
16:02 glusterbot Title: Binary Log - MariaDB Knowledge Base (at mariadb.com)
16:02 FinalX oh, no, it's not about mysql. mysql replication, as said, works fine
16:02 wushudoin joined #gluster
16:02 FinalX I need my .php files, images etc. on all nodes :)
16:02 Klas ah, sorry
16:02 Klas misunderstood =)
16:03 FinalX preferably with write on all nodes, and delayed / saved up writes are also fine (was looking at Halo for geo replication)
16:03 FinalX there will never be much updating on the data really.
16:03 Klas you are really looking for cdn solutions
16:03 Klas imo
16:03 FinalX nah, cdn's are for static stuff
16:04 Klas my point is, what is the point of the local access?
16:04 dimitris you can have different sessions for the same volume AFAIK
16:04 FinalX Updating from a master and it cascading down to other nodes is fine, preferably from 1 master to all slaves; chaining slaves I'd rather not do, but I guess it'd work, too.
16:05 FinalX dimitris: not according to the output I get
16:05 bwerthmann joined #gluster
16:06 FinalX I created a volume on the master called 'www', and did 'gluster volume geo-replication www lu::www create push-pem'. If I do 'gluster volume geo-replication www ny::www create push-pem', it tells me: 'Session between www and lu:www is already created! Cannot create with new slave:ny again!'
16:07 FinalX though at this rate I'm wondering if I should just rsync from the master node and keep the slave filesystems read-only (write to a location and bind mount that with 'ro' to where PHP & webserver read from).
16:08 Klas hmm, can't you just use clients actually?
16:09 FinalX I guess I could, but how well can I tune the read cache for them? as the latency is what I want it local for mostly
16:09 Klas ah
16:09 Klas so why not use caching proxies =P?
16:10 FinalX kinda defeats the point, and a lot of it is dynamically generated..
16:10 Klas as long as you are talking read-only ;)
16:10 FinalX well the databases are not
16:10 FinalX those are fully master-master-master-master-master-master in this case :P
16:10 Klas hehe
16:11 Klas not sure gluster sounds like the right fit for you, it's really more of a "within a datacentre" type of solution imo
16:11 Klas and then slave for certain needs
16:12 FinalX yeah, for that it works/worked great :) geo also works fine but I'd need the ability for multiple slaves. though I might give your chaining idea a go.
16:12 FinalX I could chain from nl => lv => ny => lv => sg
16:12 Klas no idea if it works
16:13 Klas also, remember that slaving is asynchronous
16:13 FinalX according to the docs it should, but.. as I noticed before, those aren't always correct ;)
16:13 FinalX yeah, but that's fine really
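
The cascade FinalX is weighing would be built hop by hop, roughly like this (a sketch; host and volume names come from the discussion, each intermediate slave needs its own volume, and the docs may call for extra configuration on intermediate masters that isn't shown here):

    # on the master (nl), replicate to the first slave:
    gluster volume geo-replication www lv::www create push-pem
    gluster volume geo-replication www lv::www start
    # then on lv, treat its slave volume as a master for the next hop:
    gluster volume geo-replication www ny::www create push-pem
    gluster volume geo-replication www ny::www start
    # ...and so on down the chain
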
16:13 Klas funnily enough, it really does sound like afs would be a good fit, btw =P
16:13 Klas good old AFS
16:14 Klas (can't recommend it though, not maintained at all and limited in many ways)
16:15 FinalX tbh I was even tempted to give mysqlfs a go, since that would be replicated well and my dataset is small :p
16:15 FinalX (but my principles are giving me grief)
16:15 Klas it might not be a bad idea in this instance actually
16:15 Klas but, yeah, I feel you
16:15 Klas storing stuff in mysql sucks
16:16 Klas several other sql dialects do it way better
16:18 dimitris FinalX try with force flag
16:20 FinalX dimitris: no-go, "Geo-replication session between www and lu::www is still active. Please stop the session and retry."
16:21 dimitris well, can you try that?
16:21 dimitris I mean, stop the current session and add a new one
16:22 dimitris then start all sessions
16:22 Klas hmm, logical actually
16:22 FinalX yeah, doing so now; for some reason my brain immediately thought that it wouldn't let me anyway even if it was stopped. heh.
16:22 FinalX nope, it tossed the old one and replaced it with the new slave
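
What dimitris suggested, spelled out as commands (volume and slave names from the discussion; on FinalX's 3.12 setup this ended up replacing the existing session rather than adding a second one):

    gluster volume geo-replication www lu::www stop
    gluster volume geo-replication www ny::www create push-pem force
    gluster volume geo-replication www ny::www start
    gluster volume geo-replication www lu::www start   # intended to restart the old session as well
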
16:23 dimitris strange this, it should support multiple sessions
16:24 dimitris even docs show it as a possible scenario (multi-site cascading)
16:24 FinalX cascading and directly, even
16:25 FinalX http://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/  => 'Multi-site cascading Geo-replication' => bottom picture shows 2 slaves (Site B + Site C), and then to two slaves each again.
16:25 glusterbot Title: Geo Replication - Gluster Docs (at docs.gluster.org)
16:26 dimitris yes
16:27 FinalX but the rest of that article just seems to mention multiple volumes, and one slave
16:27 dimitris geo-rep is set per volume
16:28 dimitris so maybe it implies setting different slaves for different volumes
16:28 jcall joined #gluster
16:28 dimitris but that's speculation from my side
16:29 FinalX yeah, does seem so. and it's mostly meant as an offsite failover, I guess. would've been excellent for this purpose though :P
16:32 dimitris sorry I can't help any further
16:33 Klas FinalX: I'm using it for backup purposes ;)
16:34 FinalX Klas: yeah, looks excellent for that :)
16:35 Klas client took 12 hours a day, 6 with nfs, this goes way faster =P
16:35 Klas lots and lots of small files =P
16:36 Klas still takes about three hours I think
16:36 FinalX I'm using ZFS locally on my server, so I can just sync snapshots over :) and big backups I do to Google Drive with rclone. haven't had the need for a networked/replicated filesystem in a while. glusterfs looked like a great fit, but seems it wasn't meant to be :)
16:36 Klas yeah, I can understand why
16:41 kpease joined #gluster
16:41 rwheeler joined #gluster
16:58 atinm joined #gluster
16:59 owlbot joined #gluster
17:03 pasqualeiv joined #gluster
17:04 pasqualeiv @glusterbot - are you back from your conference?
17:05 lakerrman joined #gluster
17:06 lakerrman Hey, I just installed 312 and tried to do HA with ganesha, but it's gone. Where's the howto for storhaug?
17:27 gyadav_ joined #gluster
17:32 msvbhat joined #gluster
17:33 lakerrman With 312, I can't do plain NFS either. I get this error in the log file 0-xlator: /usr/lib64/glusterfs/3.12.1/xlator/nfs/server.so: cannot open shared object file: No such file or directory
17:33 lakerrman any help would be great
17:35 farhorizon joined #gluster
17:52 atinmu joined #gluster
17:55 JoeJulian @later tell lakerrman No idea what storhaug is, so can't help there. HA Ganesha was documented at docs.gluster.org. The latter problem is because you do not have some package installed that adds that library file.
17:55 glusterbot JoeJulian: The operation succeeded.
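
For the missing nfs/server.so, the likely fix on 3.12 is installing the gNFS sub-package, which was split out of the main package around that release (a sketch; the package name is an assumption for CentOS/RHEL-style builds):

    yum install -y glusterfs-gnfs          # assumed package carrying the nfs/server.so xlator
    gluster volume set <volname> nfs.disable off
    systemctl restart glusterd
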
17:59 ahino joined #gluster
18:04 mallorn Does anyone here understand how locking relates to the healing process in a distributed-disperse volume?  Our files won't heal unless I set features.locks-revocation-max-blocked=1, then change it to zero again.  That effectively clears all locks (which crashes a bunch of clients using the filesystem), and then we can heal a handful of files.  Rinse, repeat.
18:05 mallorn Otherwise it just keeps copying the files to the server that's healing, and once it reaches the end it backs off to a certain point and starts over again.  We've had files healing for a month now.
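
mallorn's workaround, expressed as commands (a sketch; <volname> is a placeholder, and as noted above the revocation pass crashed some clients holding locks):

    gluster volume set <volname> features.locks-revocation-max-blocked 1
    # wait for the blocked locks to be revoked, then restore the default
    gluster volume set <volname> features.locks-revocation-max-blocked 0
    gluster volume heal <volname> info     # reportedly fast again once the locks are cleared
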
18:08 JoeJulian I know how it works in replicated volumes, but I've never looked in to disperse.
18:08 JoeJulian What version is this?
18:09 mallorn 3.10.1.
18:10 mallorn We weren't able to upgrade to 3.12 because it came out after the school year had already started, so we're waiting until winter break to do it (I work for a University).
18:12 mallorn We also can't run a 'gluster volume heal [volname] info' command -- it takes twenty minutes or so to complete.  If I clear the locks it only takes a few seconds afterwards.  I've tried it with both features.lock-heal on and off.
18:20 farhorizon joined #gluster
18:37 Guest12531 joined #gluster
18:37 JoeJulian mallorn: I'm looking but $dayjob keeps interrupting me. :)
18:41 mallorn :)  OK, thanks!
18:51 arif-ali_ joined #gluster
18:55 gbox JoeJulian: You seem to have a KISS approach to gluster, which has a KISS approach to distributed storage.  Do you use the newer features like Dispersed/EC & Tiering much?
18:57 JoeJulian gbox: I do not.
18:57 JoeJulian Though EC and Tiering do interest me.
18:57 JoeJulian EC would require sufficiently small shards to be of any value though.
19:00 gbox gbox:  Good to know so I don't keep asking about those!  Keeping rocking the KISS approach!
19:00 gbox s/ing//
19:00 glusterbot What gbox meant to say was: gbox:  Good to know so I don't keep ask about those!  Keeping rocking the KISS approach!
19:01 gbox Ha regex fail.  I am digging in to the vol files to understand the graphs and should know more about tiering by the end of the month!
19:03 JoeJulian That's an excellent way of learning how these work.
19:05 JoeJulian mallorn: I'm not seeing anything obvious. I would recommend opening a bz. Add a state-dump and the glustershd.log. Drop the link here and I'll look at it more after lunch. Btw... keep in mind I'm not one of the devs. Most of them are in the eastern hemisphere so they might look overnight.
19:09 mallorn OK, thanks!
19:11 rwheeler joined #gluster
19:14 _KaszpiR_ joined #gluster
19:45 map1541 joined #gluster
19:47 int_0x21 Does anyone know of any place with some best practices for using gluster as a VMware datastore? I'm having a hard time getting good info
19:48 int_0x21 Feature-wise I can see everything I need with ganesha-nfs and gluster-block, those tools are brilliant for multipath
19:48 dimitris joined #gluster
19:49 int_0x21 But performance-wise I'm at a loss, I'm getting something like 200MB/sec using 8 NVMe disks and a 100Gb backbone network with 25Gb/sec at the VM host
19:49 dimitris joined #gluster
19:53 major joined #gluster
20:13 vbellur joined #gluster
20:13 vbellur joined #gluster
20:14 vbellur1 joined #gluster
20:15 vbellur joined #gluster
20:15 vbellur joined #gluster
20:20 bobloblian joined #gluster
20:21 vbellur joined #gluster
20:29 plarsen joined #gluster
20:48 jkroon joined #gluster
20:50 vbellur joined #gluster
21:12 JoeJulian int_0x21 Is this native nfs or ganesha?
21:15 JoeJulian Also https://access.redhat.com/documentation/en-us/red_hat_storage/3.1/html/configuring_red_hat_enterprise_virtualization_with_red_hat_gluster_storage/chap-hosting_virtual_machine_images_on_red_hat_storage_volumes has the recommended settings for vm hosting.
21:15 glusterbot Title: Chapter 4. Hosting Virtual Machine Images on Red Hat Gluster Storage volumes - Red Hat Customer Portal (at access.redhat.com)
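
The VM-image tunables in that guide are essentially the 'virt' option group; applying them looks roughly like this (a sketch; the exact options and defaults vary by version):

    gluster volume set <volname> group virt
    # the group file expands to individual options along these lines:
    #   performance.quick-read=off, performance.read-ahead=off, performance.io-cache=off,
    #   performance.stat-prefetch=off, network.remote-dio=enable, cluster.eager-lock=enable
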
21:21 MrAbaddon joined #gluster
21:24 int_0x21 JoeJulian, ganesha but it's also gluster-block
21:24 vbellur joined #gluster
21:24 int_0x21 gluster-block leaves me at about 180MB/sec and ganesha about 200MB/sec
21:25 int_0x21 I tried those settings and no difference :( thanks though. Or do I need to stop and start the volume between changes?
21:26 JoeJulian You shouldn't need to, but for the sake of benchmarking I would.
21:29 int_0x21 I will try that tomorrow. So far gluster seems like a perfect fit if i can work this out :)
21:29 vbellur joined #gluster
21:30 JoeJulian I'd be interested in looking at context switches on the server. I'm wondering if you're running up against that as a limitation.
21:30 farhorizon joined #gluster
21:30 farhorizon joined #gluster
21:31 JoeJulian I'm really interested in this as well. I'm helping one of my team who's trying to engineer storage to accept 500k 16kB files per second using gluster and nvme.
21:38 vbellur1 joined #gluster
21:39 vbellur1 joined #gluster
21:41 vbellur1 joined #gluster
21:42 vbellur joined #gluster
21:44 farhorizon joined #gluster
21:57 int_0x21 I'm heading to sleep now, been a long day, but during the testing phase of this storage I'm more than happy to share results
21:58 int_0x21 My workload will be VMs and SQL databases (and some backup, logs and other bulk storage on spinners)
21:58 int_0x21 I'm not really counting the bulk storage since I don't have any performance requirements on those, but the database and VM access needs to be quite good
22:00 int_0x21 Hardware atm is 2 of https://www.supermicro.nl/products/system/1U/1029/SYS-1029U-TN10RT.cfm
22:00 glusterbot Title: Supermicro | Products | SuperServers | 1U | 1029U-TN10RT (at www.supermicro.nl)
22:00 int_0x21 5x 960GB NVMe drives in each
22:00 int_0x21 100GbE Mellanox ConnectX-5 EN cards
22:01 int_0x21 But anyway bedtime now :)
22:14 vbellur joined #gluster
22:16 vbellur1 joined #gluster
22:21 vbellur joined #gluster
22:35 map1541 joined #gluster
22:35 rwheeler joined #gluster
22:42 plarsen joined #gluster
22:54 arpu joined #gluster
22:55 protoporpoise joined #gluster
23:08 Wizek_ joined #gluster
23:18 sadbox joined #gluster
23:23 David_H_Smith joined #gluster
23:30 David_H_Smith joined #gluster
