
IRC log for #gluster, 2017-03-03


All times shown according to UTC.

Time Nick Message
00:00 major okay .. curious .. what is this need for Python.h that this compile keeps failing?
00:01 vinurs joined #gluster
00:01 major would prefer to not keep --disable'ing things .. but ..
00:02 major 1.5 hours till train ....
00:02 major like .. the race is on and stuffages
00:14 cyberbootje1 major, any clue if you can run glusterfs client without fuse?
00:15 major ...
00:15 major I am gonna wager that would be a not-so-much..
00:15 major like .. the glusterfs fuse client?
00:15 major you could do the nfs client .. but ... yah
00:16 cyberbootje1 yeah, the client but with the HA functions, without fuse :-)
00:18 cyberbootje1 probably not possible
00:19 major soo .. ceph?
00:22 major and .. compiling the zfs tree
00:22 major wont do anything .. original patch just added theoretical interfaces for .. something
00:28 * major twitches..
00:29 plarsen joined #gluster
00:32 major and .. that compiles
00:48 baber joined #gluster
01:03 major well .. gonna be a 2hr drive back south .. can see about trying to actually call the zfs interfaces and figure out how well .. if at all .. that they integrate with the existing lvm code
01:03 major who knows .. it might actually work
01:12 cyberbootje1 does anyone know a filesystem that does caching?
01:12 major ...
01:12 major nfs?
01:12 cyberbootje1 i mean, like zfs
01:12 zakharovvi[m] joined #gluster
01:12 major all of them?
01:13 major seems you sort of have to go out of your way to sync past the buffer caches and such
01:13 cyberbootje1 ok, different approach :D where i can assign some SSD's to do the caching
01:13 cyberbootje1 zfs can do it but i'm trying to avoid zfs now
01:14 major ... in like a gluster tier?
01:14 cyberbootje1 that would be fine too
01:14 major maybe ... http://blog.gluster.org/2016/03/automated-tiering-in-gluster/
01:15 cyberbootje1 interesting
01:15 cyberbootje1 stable ?
01:28 nh2 joined #gluster
01:34 major Not used it yet.
01:39 major I suspect JoeJulian has though... or is at least fairly familiar with it..
01:40 cyberbootje1 oh then let's see when he's available :-)
01:44 vinurs joined #gluster
02:12 major right
02:12 major and .. found the train
02:12 major well .. sort of the train
02:39 kramdoss_ joined #gluster
02:46 major I am generally going to assume that none of this ZFS code was ever even run..
02:51 derjohn_mob joined #gluster
02:53 rastar joined #gluster
03:32 shdeng joined #gluster
03:33 nbalacha joined #gluster
03:44 MrAbaddon joined #gluster
03:47 magrawal joined #gluster
04:02 atinm joined #gluster
04:07 itisravi joined #gluster
04:13 skumar joined #gluster
04:15 nishanth joined #gluster
04:18 apandey joined #gluster
04:31 Saravanakmr joined #gluster
04:34 gyadav joined #gluster
04:35 buvanesh_kumar joined #gluster
04:38 aravindavk joined #gluster
04:41 Shu6h3ndu joined #gluster
04:46 jiffin joined #gluster
04:47 RameshN_ joined #gluster
04:48 ankitr joined #gluster
04:50 riyas joined #gluster
04:51 Prasad joined #gluster
04:59 rafi joined #gluster
05:08 BitByteNybble110 joined #gluster
05:16 prasanth joined #gluster
05:16 ppai joined #gluster
05:19 karthik_us joined #gluster
05:20 ndarshan joined #gluster
05:23 skoduri joined #gluster
05:34 msvbhat joined #gluster
05:43 rjoseph joined #gluster
05:47 RameshN joined #gluster
05:48 kdhananjay joined #gluster
05:49 susant joined #gluster
05:54 ankitr_ joined #gluster
05:55 susant left #gluster
05:58 sbulage joined #gluster
06:17 hgowtham joined #gluster
06:21 riyas joined #gluster
06:22 Karan joined #gluster
06:26 kramdoss_ joined #gluster
06:27 poornima joined #gluster
06:27 ashiq joined #gluster
06:31 kotreshhr joined #gluster
06:32 skoduri joined #gluster
06:33 Philambdo joined #gluster
06:35 hgowtham joined #gluster
06:36 chris349 joined #gluster
06:38 Klas I'm trying to understand a few things about geo-replication.
06:38 Klas First, is geo-replication not a function of a volume, but a function of a server?
06:38 Klas Second, it states it uses rsync, does this mean that it's basically just a cron-job running rsync?
06:38 rafi1 joined #gluster
06:39 skumar_ joined #gluster
06:49 nishanth joined #gluster
06:52 kramdoss_ joined #gluster
06:52 karthik_us joined #gluster
06:56 sona joined #gluster
06:58 itisravi joined #gluster
07:01 mbukatov joined #gluster
07:04 ankush joined #gluster
07:04 k4n0 joined #gluster
07:12 gyadav joined #gluster
07:17 jtux joined #gluster
07:21 Abazigal joined #gluster
07:28 [diablo] joined #gluster
07:32 rastar joined #gluster
07:35 gyadav joined #gluster
07:47 rafi1 joined #gluster
07:47 msvbhat joined #gluster
07:48 ivan_rossi joined #gluster
07:50 kdhananjay joined #gluster
08:01 skoduri joined #gluster
08:16 mhulsman joined #gluster
08:24 mhulsman1 joined #gluster
08:30 zakharovvi[m] joined #gluster
08:31 rastar joined #gluster
08:46 john2 joined #gluster
08:48 ankush joined #gluster
08:52 fsimonce joined #gluster
08:57 karthik_us joined #gluster
09:01 prasanth joined #gluster
09:02 derjohn_mob joined #gluster
09:04 flying joined #gluster
09:04 ShwethaHP joined #gluster
09:08 k4n0 joined #gluster
09:09 Klas seems like geo-replication requires a gluster volume, can you even create a gluster volume with just one server?
09:10 itisravi joined #gluster
09:19 mhulsman1 joined #gluster
09:24 Klas cloph: I just noticed I missed part of what you wrote the other day about georeplication to other volume, so, basically, is it possible to create a one-node volume?
09:25 itisravi_ joined #gluster
09:25 pulli joined #gluster
09:30 rafi1 joined #gluster
09:39 buvanesh_kumar joined #gluster
09:40 p7mo_ joined #gluster
09:45 rastar joined #gluster
09:51 jiffin Klas: u can create one node volume
09:51 jiffin but not recommended
09:55 Seth_Karlo joined #gluster
09:56 Seth_Kar_ joined #gluster
10:07 cyberbootje1 JoeJulian, any clue if automated tiering is stable and if it has positive effect on using it for VM images?
10:12 skumar joined #gluster
10:14 karthik_us joined #gluster
10:17 mhulsman joined #gluster
10:17 rafi1 joined #gluster
10:19 mhulsman1 joined #gluster
10:20 k4n0 joined #gluster
10:33 ankush joined #gluster
10:41 rastar joined #gluster
10:44 Klas jiffin: how?
10:44 Klas in this case, it should be fine
10:44 Klas this volume will only be used for geo-replication for backup (not even restore) purposes
10:58 arpu joined #gluster
11:00 Seth_Karlo joined #gluster
11:05 jiffin Klas: I guess for georep both master and slave should be in the same configuration, what I meant to say was that using gluster you can create a single brick (node) volume
11:05 Klas yes, and that is what I'm asking how to do it =)
11:06 Klas I do not want to use the georeplicated node in a way that HA is an issue
11:06 Klas thus, a single brick would be fine
11:06 Klas and using any more would be a waste
11:06 Klas and, partly, defeat the purpose which I am trying to accomplish
11:10 chris349 joined #gluster
11:15 ira joined #gluster
11:18 mhulsman joined #gluster
11:22 jiffin Klas: you can check with kotreshhr or aravindavk
11:26 aravindavk Klas: Geo-replication uses rsync, but change detection is done using Gluster changelogs (metadata journal); rsync just gets the list of files to sync. Geo-replication syncs files using a two-step operation: an entry operation and a data operation. In the entry operation geo-rep creates an empty file on the slave with the same GFID as on the master, then it feeds the GFID list to rsync jobs which sync the data.
11:27 aravindavk Klas: since geo-rep maintains the same GFID on both the master volume and the slave volume, it expects the slave volume to also be a Gluster volume.
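For reference, a rough sketch of how such a geo-replication session is usually set up from the master side; the volume and host names (mastervol, slavehost, slavevol) are placeholders, and prerequisites such as passwordless SSH from the master node to the slave node are assumed:

    # generate and distribute the common pem key, then create, start and check the session
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status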
11:28 Klas nice
11:28 Klas aravindavk: only one question left then, is a single-node volume possible?
11:29 aravindavk Klas: technically possible, but if that node goes down then volume will not be available
11:32 mhulsman joined #gluster
11:33 Klas I am aware
11:33 Klas that is fine
11:33 Klas I just want to know how
11:33 Klas this georeplicated volume will only be used for non-critical purposes
11:34 Klas basically, I wouldn't care if it wasn't a volume, at all
11:34 Klas except that it's needed for georeplication
11:36 Klas or, do you mean that the entire volume fails if georeplication fails?
11:37 ivan_rossi joined #gluster
11:38 Prasad_ joined #gluster
11:56 msvbhat joined #gluster
11:58 Klas aravindavk: sooo, about that answer ;)?
12:19 mhulsman joined #gluster
12:34 Prasad__ joined #gluster
12:35 kramdoss_ joined #gluster
12:47 kpease joined #gluster
12:47 jtux joined #gluster
12:47 Clone heya, gluster 3.8.5 2 node with arbiter setup. Currently, we have 30000 files that need healing and gluster is complaining with: remote operation failed. Path: (null) [No space left on device], but there is still 219GB free on the brick. I see a _lot_ of open file descriptors held by gluster, even with all clients disconnected. What would be the best way to get gluster to release these? Stop the volume?
12:48 kpease_ joined #gluster
12:49 Klas checked inodes?
12:49 Klas on both servers and arbiter?
12:49 Klas df -i
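For example, a quick inode check run on every server holding a brick and on the arbiter (the brick path is a placeholder); a filesystem at 100% IUse% can surface as "No space left on device" even when df -h still shows free space:

    df -i /bricks/brick1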
12:49 skoduri joined #gluster
12:49 Clone Klas: you mean the deleted ones?
12:51 Klas not really, just thinking about common ways to see why a system complains about lack of space
12:51 Clone ah, we increased the inodes already, this is xfs.
12:51 Clone so that's not it.
12:51 plarsen joined #gluster
12:52 Clone I think that we have fd open on deleted files.
12:52 ira joined #gluster
13:02 john2 hi, after upgrading from 3.8 to 3.10 auth.allow stopped working: whatever I put in there, no client can connect. did something change with respect to source IP or whatever? Can't manage to get it working again, even playing with auth.reject...
13:02 john2 whatever except '*' ofc
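For context, a minimal sketch of how auth.allow is normally set per volume; the volume name and addresses are placeholders, and values can be comma-separated or contain wildcards:

    gluster volume set myvol auth.allow 192.168.0.10,192.168.0.11
    gluster volume set myvol auth.allow 192.168.0.*
    gluster volume info myvol    # the reconfigured option should show up here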
13:03 Clone Klas: you're right.. arbiter is 100%.. I thought this was fixed. ++
13:04 jiffin john2: Just curious  which client are you using ?
13:04 john2 jiffin: native
13:06 jiffin john2: what abt new volumes?
13:08 john2 jiffin: you mean I shall create a new one and test? worth a try
13:08 jiffin john2: if possible I suggest you to do that
13:09 kpease joined #gluster
13:12 fcoelho joined #gluster
13:12 d0nn1e joined #gluster
13:13 sona joined #gluster
13:17 unclemarc joined #gluster
13:25 shyam joined #gluster
13:26 john2 jiffin: (was interrupted) same problem with a small test volume
13:27 msvbhat joined #gluster
13:27 john2 I feel a little stupid, but I triple checked IP address and tested also with hostname
13:28 mhulsman1 joined #gluster
13:32 mhulsman joined #gluster
13:33 Klas ah, creating a volume on single node was as easy as:
13:33 Klas "gluster vol create ${volume_name} ${servername}:${path}"
13:34 Klas Clone: It's a very easy thing to forget to check =)
13:36 john2 hum, even created a volume with a new name (not "test", in case there were some leftovers like a volfile or whatever), same issue
13:40 baber joined #gluster
13:43 kramdoss_ joined #gluster
13:44 nbalacha joined #gluster
13:47 plarsen joined #gluster
13:55 ankush joined #gluster
14:01 Saravanakmr joined #gluster
14:09 cloph meh - trying to read up on the small-file feature that StormTide was discussing, only to find out that his pastes have already expired. :-(
14:10 derjohn_mob joined #gluster
14:12 cloph @whatis php 2
14:12 glusterbot cloph: It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
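As an illustration only, those options could be put into an fstab entry roughly like this; the server name, mount point and timeout values are placeholders, not recommendations:

    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache  0 0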
14:19 cloph Klas: as answered: single node/peer/brick volumes for use with geo-replication are no problem. You need to be aware though that geo-replication has problems with some patterns of symlinks
14:19 cloph this will make the geo-rep session go into faulty mode and not sync anymore. Also it is horrible at hybrid crawl, i.e. when trying to do the initial sync of a volume that already has data on it (and if that data changes fast).
14:21 cloph if the geo-rep session fails, the master and slave volumes don't care, you could still access the files (of course the slave then wouldn't have the current data). Also, you must not change files on the slave yourself, otherwise it will likely break geo-replication (when it happens to create GFID clashes)
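A faulty session normally shows up in the status output; a sketch using the same placeholder names as above:

    gluster volume geo-replication mastervol slavehost::slavevol status
    # a worker stuck in Faulty usually leaves a traceback under /var/log/glusterfs/geo-replication/ on the master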
14:22 jiffin joined #gluster
14:33 kramdoss_ joined #gluster
14:37 skylar joined #gluster
14:45 saali joined #gluster
14:45 Klas sounds like what I expected
14:45 Klas the datasets will be, at most 500-1000 MBs a day or so
14:45 Klas additional data
14:45 Klas and few changes
14:46 Klas however, currently, it fails to sync data at all
14:46 Klas ssh permisssions
14:58 Seth_Karlo joined #gluster
14:59 masber joined #gluster
15:03 plarsen joined #gluster
15:03 ppai joined #gluster
15:05 kotreshhr left #gluster
15:20 jtux joined #gluster
15:32 Shu6h3ndu joined #gluster
15:32 danielitit_ joined #gluster
15:36 sbulage joined #gluster
15:43 oajs joined #gluster
15:45 farhorizon joined #gluster
15:51 rwheeler joined #gluster
15:51 vito joined #gluster
15:52 vitor joined #gluster
15:52 vitoreiter joined #gluster
15:53 vitoreiter I have just begun researching GlusterFS after being a long time FreeNAS user. I'm looking into the possibility of having a zpool across two physical servers and was wondering if this software makes that at all possible?
15:55 Wizek_ joined #gluster
15:56 cloph you need to be more specific on what zpool would have to do with gluster in your scenario
15:57 snehring vitoreiter: no not really
15:57 snehring you would have two distinct zpools if you were going to run zfs under gluster
15:57 snehring on two servers anyway
15:57 vitoreiter I suppose it's hard to explain, I'd like to use ZFS and GlusterFS together. If I had two physical servers and wanted to use one NFS share on top of one zpool, is that not possible?
15:57 vitoreiter Gotcha.
15:58 snehring you could have one glusterfs volume that runs on those zpools though
15:58 snehring that you could present as a single nfs share
15:58 vitoreiter Ahh, so that is actually what I'm looking to go for.
15:59 vitoreiter I'll keep looking into it then. Thanks a bunch.
15:59 snehring np
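A rough sketch of that layout, assuming one zpool ('tank') per server and placeholder host/volume names; how the resulting volume is then exported over NFS (gnfs or NFS-Ganesha) is a separate choice:

    # on each server: a dataset on the local zpool holds the brick
    zfs create tank/gluster
    mkdir -p /tank/gluster/brick
    # on one server, after the peers know each other
    gluster peer probe server2
    gluster volume create gv0 replica 2 server1:/tank/gluster/brick server2:/tank/gluster/brick
    gluster volume start gv0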
16:07 farhorizon joined #gluster
16:08 wushudoin joined #gluster
16:16 ivan_rossi left #gluster
16:28 level7 joined #gluster
16:33 vbellur joined #gluster
16:34 vbellur joined #gluster
16:34 vbellur joined #gluster
16:35 vbellur joined #gluster
16:35 vbellur joined #gluster
16:36 vbellur joined #gluster
16:42 atm0sphere joined #gluster
16:54 vbellur joined #gluster
17:00 nishanth joined #gluster
17:24 atm0sphere joined #gluster
17:29 melliott joined #gluster
17:38 RustyB_ joined #gluster
17:39 sage_ joined #gluster
17:39 xavih_ joined #gluster
17:39 armin_ joined #gluster
17:39 kkeithle joined #gluster
17:39 pasik_ joined #gluster
17:40 squeakyneb_ joined #gluster
17:40 siel_ joined #gluster
17:40 siel_ joined #gluster
17:41 yawkat` joined #gluster
17:42 tg2_ joined #gluster
17:42 iopsnax joined #gluster
17:42 anoopcs_ joined #gluster
17:43 sona joined #gluster
17:43 mdavidson joined #gluster
17:43 csaba joined #gluster
17:44 AppStore joined #gluster
17:59 jiffin joined #gluster
18:03 jiffin joined #gluster
18:12 rastar joined #gluster
18:14 jiffin1 joined #gluster
18:15 chris349 joined #gluster
18:15 atm0sphere joined #gluster
18:19 moneylotion can anyone help, im only getting like 15-20 MB/s reads/writes on replica volume w/ 2 bricks
18:19 moneylotion does this make sense? I would think I could get at least 50 MB/s
18:19 moneylotion over 1 gbe
18:20 jiffin joined #gluster
18:23 jiffin1 joined #gluster
18:29 jiffin joined #gluster
18:31 jiffin joined #gluster
18:34 klaas joined #gluster
18:35 klaas joined #gluster
18:38 Gambit15 moneylotion, gluster volume info <vol>
18:38 Gambit15 @paste
18:38 glusterbot Gambit15: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
18:39 msvbhat joined #gluster
18:39 Gambit15 Uh, I generally use fpaste.org
18:39 moneylotion @gambit15 - http://pastebin.com/mnEkjQvu
18:43 Gambit15 moneylotion, stop Gluster, do an iperf test between the servers & do an IO test directly on the storage device (rather than via gluster & fuse)
18:43 jiffin joined #gluster
18:46 moneylotion @gambit15 - iperf 942 Mbits/sec, fio write tests 400MB/s w/o gluster, 20 MB/s with gluster
18:47 Karan joined #gluster
18:49 vbellur joined #gluster
18:50 jiffin joined #gluster
18:52 Gambit15 Remember to disable caching in the kernel before testing on the raw device, that could mislead somewhat.
18:52 Gambit15 I just use dd...
18:53 Gambit15 Bandwidth:
18:53 Gambit15 dd if=/dev/zero of=test.dd bs=1G count=1 conv=fdatasync
18:53 jiffin1 joined #gluster
18:53 Gambit15 IOPS/Latency: dd if=/dev/zero of=test.dd bs=4k count=10000 conv=fdatasync
18:53 moneylotion i'm using zfs, I have heard zfs ignores zeros
18:53 Gambit15 (you can vary the blocksize/bs in the above)
18:54 Gambit15 It uses the CPU more, but you could also use urandom
18:55 Gambit15 FWIW, different blocksizes can give very different results
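Since ZFS compression can make /dev/zero writes look unrealistically fast, a variant with pre-generated random data may be more telling; sizes and paths here are arbitrary:

    # generate the random data once so the cost of urandom doesn't skew the timed run
    dd if=/dev/urandom of=/tmp/rand.bin bs=1M count=1024
    dd if=/tmp/rand.bin of=/mnt/gluster/test.dd bs=1M count=1024 conv=fdatasync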
18:55 moneylotion im still getting like 370 MB/s on the volume with dd
18:56 farhoriz_ joined #gluster
18:57 Gambit15 Now try it again with Gluster. See if you get much difference between one big sequential write (bandwidth) & lots of smaller ones (IOPS)
18:58 Gambit15 ...you'll probably want to reduce the values a bit, given it's already going slowly
18:58 Gambit15 And whilst you're at it, keep an eye on your server load & wait times
18:58 farhoriz_ joined #gluster
19:01 moneylotion 86 MB/s
19:02 moneylotion got pretty good network throughput as well... about 700 mb/s across to the replicated node
19:02 moneylotion why would I get such slow performance then, for serving files, or samba performance?
19:03 ahino joined #gluster
19:05 Gambit15 You're mounting the volume via samba?
19:05 moneylotion i mount with the gluster client (in fstab), and then share with samba, or netatalk, or anything - I haven't seen 80 MB/s anywhere other than dd
19:06 Gambit15 I presume that 86MB/s was for the large file, rather than the many smaller ones?
19:06 moneylotion yeah
19:06 moneylotion 5 GB
19:07 moneylotion are large numbers of files difficult for gluster as well as smaller files??? *curious and interested in the ins and outs
19:11 Gambit15 Filesize doesn't make much difference, it's the number of them. Lots of small reads & writes naturally incur more latency than large sequential r/w. Listing large directory structures has been its weakness, however I don't know whether that's improved over the last couple of months.
19:14 Gambit15 "I haven't seen 80 MB/s anywhere other then DD" - the benchmarking tools you're using probably draw an average of sequential & non-sequential r/w. The DD method tests each one individually, which helps narrow down the issue
19:14 jiffin joined #gluster
19:16 Gambit15 Are you running these IO tests on the fuse mounted volume, or at the samba endpoint? I suggest you start with the first, to eliminate any issuess with samba
19:18 moneylotion I haven't done any real tests with samba endpoints
19:23 moneylotion @gambit15 what about latency - how do people use this with web services (ie facebook), for serving files w/ latency issues, are they most likely caching heavily?
19:23 jiffin joined #gluster
19:23 moneylotion I'm hoping to replicate an owncloud server
19:27 Gambit15 https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
19:27 glusterbot Title: Optimizing web performance with GlusterFS (at joejulian.name)
19:30 moneylotion nice article - smart ideas
19:35 jiffin joined #gluster
19:35 ahino joined #gluster
19:36 Gambit15 I only use it for VM hosting, so a smaller number of large files, and it works brilliantly for that use.
19:45 jiffin joined #gluster
19:46 danielitit joined #gluster
19:46 danielitit joined #gluster
19:47 devcenter joined #gluster
19:57 Gambit15 moneylotion, sorry, phone call. To add to those comments, you should still be seeing better throughput than you're getting currently, so there's still more to investigate
19:59 Gambit15 Other than glusterd causing high load on the CPU, I'm not sure what else could be causing your slowdown. Someone else with more knowledge may be able to help with that
20:00 farhorizon joined #gluster
20:01 moneylotion *** only a two core pentium - that may be part of the issue - I originally did a big migration to like 10 gluster mounts, and things really slowed down
20:09 jiffin joined #gluster
20:17 pioto joined #gluster
20:19 jiffin joined #gluster
20:19 derjohn_mob joined #gluster
20:25 pioto joined #gluster
20:26 pioto joined #gluster
20:28 pioto joined #gluster
20:43 ashka joined #gluster
20:43 ashka joined #gluster
21:00 major okay .. finally have 2 of the nodes up and doing tests against them .. figure I will beat the crap out of it this way for a few .. particularly while working on this experimental code .. before adding in more nodes
21:02 Vapez joined #gluster
21:49 Vapez_ joined #gluster
22:00 vbellur joined #gluster
22:13 ttkg joined #gluster
22:43 nishanth joined #gluster
23:27 barajasfab joined #gluster
23:43 CmndrSp0ck joined #gluster
23:47 CmndrSp0ck joined #gluster
23:48 CmndrSp0ck joined #gluster
