IRC log for #gluster, 2016-08-26


All times shown according to UTC.

Time Nick Message
00:07 jkroon joined #gluster
00:17 ashiq joined #gluster
00:58 Alghost joined #gluster
00:59 Alghost joined #gluster
01:09 Alghost joined #gluster
01:15 shdeng joined #gluster
01:16 jkroon joined #gluster
01:34 Lee1092 joined #gluster
01:39 plarsen joined #gluster
01:50 Alghost joined #gluster
01:50 aj__ joined #gluster
02:08 harish joined #gluster
02:18 hagarth joined #gluster
02:44 Javezim In a Replica 3 Arbiter 1, What happens if the Arbiter Volume goes offline?
02:44 Javezim Does the Metadata get written to the Replica nodes and then healed when the Arbiter comes back online?
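
    For context on the arbiter question: the arbiter brick holds only file names and metadata, and client quorum is still satisfied with 2 of the 3 bricks up, so the expected behaviour is that writes keep landing on the two data bricks while the arbiter is down, with self-heal catching the arbiter up once it returns. A quick way to watch that, assuming a volume named "gv0" (the name is only illustrative):

        gluster volume heal gv0 info    # entries still pending heal, listed per brick
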
02:55 m0zes joined #gluster
02:55 ic0n_ joined #gluster
02:57 kovshenin joined #gluster
03:02 johnmark joined #gluster
03:02 Gambit15 joined #gluster
03:06 samsaffron___ joined #gluster
03:07 swebb joined #gluster
03:18 magrawal joined #gluster
03:27 aravindavk joined #gluster
03:37 kramdoss_ joined #gluster
03:42 atinm joined #gluster
03:53 sanoj joined #gluster
03:55 riyas joined #gluster
03:58 kovshenin joined #gluster
03:58 thwam joined #gluster
04:04 Alghost joined #gluster
04:06 itisravi joined #gluster
04:13 aspandey joined #gluster
04:14 shubhendu joined #gluster
04:19 om Hi, having performance issues. Is it too much to expect fast performance when listing 640 files and directories in one directory?
04:20 om I mean, when I run cmd ls -lah on a glusterfs volume directory with 640 files/dirs it takes over a minute to return output
04:21 om I checked how quickly the glusterfs volume copies from one directory on that volume to another and it seems ok, peaking at 22 MB/s
04:21 om a bit slow, but ok.
04:21 om but not the horrible ls -lah performance issue...
04:25 eightyeight so, trying to understand quotas. i just enabled quota, and set a directory limit of 10MB for testing, then wrote a 20MB file, and it succeeded
04:25 eightyeight shouldn't the hard limit prevent that from happening?
04:29 eightyeight a pastebin of what i'm seeing: https://ae7.st/p/1cf
04:31 itisravi eightyeight: For strict enforcing, I think you'd need to set the hard and soft timeouts to 0 seconds
04:32 itisravi om: Examining the volume profile might give some clues.
04:33 eightyeight ah. didn't know there was a timeout
04:34 eightyeight `features.quota-timeout' is set to `0'
04:34 eightyeight isn't that cache size?
04:35 eightyeight well, a cache of directory sizes
04:37 kshlm joined #gluster
04:37 eightyeight glusterfs prevents me from writing anything else, with disk quota exceeded error, but i'm surprised that it let me create that 20MB file
04:46 raghug joined #gluster
04:47 itisravi enforcement is not real time unless timeouts are zero. see https://www.gluster.org/pipermail/gluster-users/2016-January/024999.html
04:47 glusterbot Title: [Gluster-users] Quota list not reflecting disk usage (at www.gluster.org)
04:48 itisravi Not quota-timeout. `gluster volume quota` will give you the list of available options.
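
    A minimal sketch of the strict-enforcement settings being discussed, assuming a volume named "testvol" and a 10MB limit on /dir (the names and size are only illustrative):

        gluster volume quota testvol enable
        gluster volume quota testvol limit-usage /dir 10MB
        gluster volume quota testvol soft-timeout 0
        gluster volume quota testvol hard-timeout 0

    With both timeouts at 0 the enforcer re-validates directory usage on every write instead of trusting cached sizes, which trades some write throughput for accuracy.
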
04:48 om what profile itisravi
04:48 om ?
04:49 om I think I may have before, but found nothing interesting...
04:50 itisravi om: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/
04:50 glusterbot Title: Monitoring Workload - Gluster Docs (at gluster.readthedocs.io)
04:50 itisravi It can give you an idea of which fops are being sent and their latencies.
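
    The workflow from that page boils down to a few commands; a sketch, with "gv0" standing in for the real volume name:

        gluster volume profile gv0 start
        # reproduce the slow ls -lah on a client, then:
        gluster volume profile gv0 info    # per-brick fop counts and avg/min/max latency in microseconds
        gluster volume profile gv0 stop
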
04:51 karthik_ joined #gluster
04:52 jiffin joined #gluster
04:54 eightyeight ah. i see. thx
04:54 eightyeight what is the default hard-timeout?
04:55 Bhaskarakiran joined #gluster
04:55 nbalacha joined #gluster
04:56 eightyeight nvm
04:58 om itisravi: are these latencies in ms?
04:58 om https://gist.github.com/andrebron/b86813097dee5bdbc4d7b29bb05febe0
04:58 glusterbot Title: glusterfs 3.7.14 latencies - profile · GitHub (at gist.github.com)
04:59 kovshenin joined #gluster
04:59 bkunal joined #gluster
05:00 ankitraj joined #gluster
05:01 masber joined #gluster
05:03 itisravi microseconds like it says in the output.
05:03 Saravanakmr joined #gluster
05:03 itisravi nothing unusual there. lookups seem to be having the most latency.
05:04 om yea, saw the lookups latency...
05:04 om perhaps performance has something to do with replication?
05:04 om this volume has 2 bricks in one data center and 2 bricks in another datacenter
05:05 itisravi could be. AFR sends lookups to all bricks of the replica.
05:05 * itisravi has to go.
05:08 om thanks itisravi !
05:10 devyani7 joined #gluster
05:13 RameshN joined #gluster
05:14 ankitraj joined #gluster
05:14 Alghost joined #gluster
05:15 delhage joined #gluster
05:16 Alghost joined #gluster
05:18 prasanth joined #gluster
05:18 fcoelho joined #gluster
05:21 prasanth joined #gluster
05:21 aravindavk joined #gluster
05:28 ndarshan joined #gluster
05:32 devyani7 joined #gluster
05:33 prasanth joined #gluster
05:38 ankitraj joined #gluster
05:38 atalur joined #gluster
05:42 karthik_ joined #gluster
05:43 kdhananjay joined #gluster
05:48 karnan joined #gluster
05:51 mhulsman joined #gluster
05:52 kotreshhr joined #gluster
05:55 Muthu_ joined #gluster
05:56 Saravanakmr hchiramm,  ping - Please check and merge - https://github.com/gluster/glusterdocs/pull/145
05:56 glusterbot Title: Create op_version.md by SaravanaStorageNetwork · Pull Request #145 · gluster/glusterdocs · GitHub (at github.com)
05:57 raghug joined #gluster
05:59 aspandey joined #gluster
05:59 masuberu joined #gluster
06:00 atalur joined #gluster
06:01 kovshenin joined #gluster
06:02 ehermes joined #gluster
06:02 hgowtham joined #gluster
06:06 mhulsman joined #gluster
06:13 skoduri joined #gluster
06:13 karthik_ joined #gluster
06:18 ramky joined #gluster
06:18 msvbhat joined #gluster
06:19 ankitraj joined #gluster
06:20 saltsa joined #gluster
06:22 jtux joined #gluster
06:29 ZachLanich joined #gluster
06:32 satya4ever joined #gluster
06:36 ppai joined #gluster
06:38 David_Varghese joined #gluster
06:41 arcolife joined #gluster
06:41 hchiramm Saravanakmr, there is a comment from ppai
06:45 Saravanakmr hchiramm, ok..checking
06:47 rafi joined #gluster
06:50 jtux joined #gluster
06:52 ashiq joined #gluster
06:57 fsimonce joined #gluster
06:58 atalur joined #gluster
06:59 kxseven joined #gluster
06:59 ankitraj joined #gluster
07:02 kovshenin joined #gluster
07:06 raghug joined #gluster
07:10 ivan_rossi joined #gluster
07:14 Manikandan joined #gluster
07:22 jri joined #gluster
07:22 ivan_rossi left #gluster
07:24 sanoj joined #gluster
07:29 pur joined #gluster
07:33 ankitraj joined #gluster
07:42 tdasilva joined #gluster
07:53 Larsen joined #gluster
07:56 David_Varghese joined #gluster
07:57 shdeng joined #gluster
07:59 ivan_rossi joined #gluster
08:00 [diablo] joined #gluster
08:00 kovshenin joined #gluster
08:01 [diablo] morning #gluster
08:01 [diablo] guys, how do I place an ACL on a volume to specify which Gluster native clients can connect to it, please?
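
    One possible approach for native (FUSE) clients is the auth.allow / auth.reject volume options, which filter by client address rather than by user; a sketch with an invented volume name and addresses:

        gluster volume set myvol auth.allow 10.0.0.10,10.0.0.11
        gluster volume info myvol    # the setting shows up under Options Reconfigured
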
08:06 ankitraj joined #gluster
08:08 itisravi joined #gluster
08:19 kxseven joined #gluster
08:23 rastar joined #gluster
08:30 David_Varghese joined #gluster
08:39 hackman joined #gluster
08:39 ankitraj joined #gluster
08:45 jkroon joined #gluster
08:47 muneerse joined #gluster
08:49 atalur joined #gluster
08:57 muneerse joined #gluster
09:00 ankitraj joined #gluster
09:01 harish__ joined #gluster
09:01 ankitraj joined #gluster
09:02 shdeng joined #gluster
09:04 BitByteNybble110 joined #gluster
09:11 kshlm joined #gluster
09:19 jkroon joined #gluster
09:19 Bhaskarakiran joined #gluster
09:20 atinm joined #gluster
09:25 hackman joined #gluster
09:27 David_Varghese joined #gluster
09:42 hackman joined #gluster
09:43 atalur joined #gluster
09:44 rafi1 joined #gluster
09:48 LinkRage joined #gluster
09:49 atinm joined #gluster
10:07 itisravi joined #gluster
10:19 ashiq joined #gluster
10:20 David_Varghese joined #gluster
10:22 atinm joined #gluster
10:23 poornima joined #gluster
10:25 aj__ joined #gluster
10:30 shyam joined #gluster
10:32 aravindavk joined #gluster
10:40 kovshenin joined #gluster
10:42 kovshenin joined #gluster
10:52 ashiq joined #gluster
10:57 rafi1 joined #gluster
10:58 robb_nl joined #gluster
11:00 msvbhat joined #gluster
11:02 aravindavk joined #gluster
11:08 bluenemo joined #gluster
11:17 [fre] joined #gluster
11:23 kovshenin joined #gluster
11:23 [fre] Guys, currently our gluster thin pool is filled with one big thin logical volume used to create bricks on. This big LV was created with a BS specifically chosen for big file transfers. Now we'd like to create an additional thin logical volume to use with small files. The question is: how do I tell Gluster AND LVM that I want an additional LV? Can I simply do an lvreduce -r and an lvcreate -V -T? And what is the risk for my running data?
11:29 bkunal what I understand is that you have an existing thin-pool which is being consumed by a single LV, and now you want to free some space from the thin-pool and use it for a new LV... right?
11:30 Larsen_ joined #gluster
11:30 bkunal If yes, you can certainly do lvreduce to free some space from thin-pool. I assume that your LV is not full and has space so that it can be reduced
11:30 [fre] it is
11:31 bkunal once you have done the lvreduce, you can create a new LV
11:31 [fre] bkunal, what's the impact on my data?
11:31 bkunal But I did not get your question: how do I tell gluster ......?
11:32 bkunal lvreduce will take care of blocks, it will free blocks which are unused
11:32 [fre] Well, those are all volumes meant to be used by gluster. Is gluster going to be aware of changed sizes?
11:33 [fre] bkunal, is there any meta-data kept by gluster, referring to those volumes?
11:33 bkunal do not reduce your LV beyond the available free size; if you do, you might lose your data
11:34 [fre] it's 40TB, which I want to reduce to 20. Max 10TB used.
11:34 bkunal if you are modifying/reducing the LV, you will need to reduce the filesystem as well. Which filesystem are you using?
11:34 [fre] xfs
11:34 bkunal XFS does not have shrink option
11:34 bkunal you can not reduce your filesystem
11:35 [fre] I'm aware of that. yet, it's on a thin-volume.
11:35 bkunal if you reduce only the LV and not the FS, there is a chance that you get a block allocation failure
11:36 bkunal because you have an over-committed FS
11:36 [fre] how does XFS handle thin-provisioning then?
11:37 ndevos [fre]: fstrim can be used to return unused blocks back to the VG, the size of the LV should decrease then
11:37 bkunal thin-provisioning doesn't give you extra space if you don't have it in the backend. In thin-provisioning, block allocation/reservation takes place as and when needed
11:38 karnan joined #gluster
11:38 ndevos at least, I think it should; I thought there wasn't any need to do any lvm commands for it
11:38 [fre] bkunal,ndevos, how does the FS handles that?
11:39 ndevos [fre]: fstrim sends discards/unmap to the block device, previously used+freed blocks are then given back to the device
11:41 ndevos [fre]: thin provisioned LVs are just keeping pointers to blocks in the VG, when the fs informs the block-device the blocks are not needed anymore, the device (thin LV here) can then return them to the pool (VG)
11:41 B21956 joined #gluster
11:43 ndevos [fre]: mkfs only creates a superblock (and some copies) with some meta-data for the filesystem, it does not write (like zero-out) the whole size of the filesystem, it is similar to a huge sparse file with allocation-on-demand
11:43 Manikandan joined #gluster
11:44 [fre] ndevos, ok. So XFS is not really aware he's got 40TB available.
11:45 ndevos [fre]: the size of the thin-provisioned LV is what XFS sees that is available, XFS does not know what size the VG is that it could use
11:48 [fre] ndevos, do I get this right, if I want to proceed with it: I do an fstrim /data && lvreduce && lvcreate? I do need the lvreduce, I presume, because otherwise LVM won't know about the changed lv-size (whether it is thin or not).
11:49 lord4163 left #gluster
11:50 atinm joined #gluster
11:50 ndevos [fre]: uh, no, the size of the LV that you created should not be modified, if it is thin-provisioned, fstrim should return unused blocks back to the VG
11:51 ndevos [fre]: if you use lvreduce, it may free up parts of the LV that XFS uses and you may end up with data-loss and filesystem corruption
11:51 [fre] ok. makes sense.
11:51 [fre] let me try that first.
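
    In other words, the space goes back to the pool via discards rather than by shrinking the LV. A sketch of that, assuming the XFS brick is mounted at /data and sits on a thin LV (the path is illustrative):

        fstrim -v /data    # tell the thin LV which blocks the filesystem no longer uses
        lvs                # the thin pool's Data% should drop accordingly

    Mounting with -o discard gives the same effect continuously, at some performance cost.
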
11:51 bkunal ndevos, here [fre] is trying to free some space in the thin-pool to create a new LV. For this he would need to run lvreduce. But as it is formatted with XFS, we can't reduce it, so reducing the LV won't give him any benefit
11:52 [fre] so, never use XFS when doing gluster-thin-pool-brick-volumes. ;)
11:52 bkunal ndevos, [fre] so I don't think there is any direct way to achieve the goal other than re-formatting
11:53 bkunal [fre], you can always keep the size you want now; you always have the option to grow the fs, so if you want additional space later, you can grow it.
11:53 post-factum what is the problem with fstrim?
11:56 bkunal post-factum, fstrim sends discards/unmap to the block device and previously used+freed blocks are then given back to the device, but [fre]'s use case is different: he wants to free space in the thin-pool and create a new LV
11:57 post-factum fstrim will free space not used in thin pool
11:57 post-factum fstrim is not only about ssds ;)
11:57 mhulsman joined #gluster
11:58 d0nn1e joined #gluster
11:58 bkunal post-factum, how will the filesystem understand that you have freed blocks and given them to a different LV?
11:59 post-factum fs shouldn't understand it. fstrim will tell fs to pass info about unused blocks to block layer, but block layer is thin pool, and it will get the info about unused space... and will free it
11:59 shyam joined #gluster
11:59 bkunal post-factum, you should not have a filesystem bigger than the backing device, else you should be ready for block allocation failures
11:59 post-factum bkunal: wrong :)
12:00 bkunal post-factum, can you have a 10GB file-system on top of an 8GB LV?
12:00 [fre] yes
12:00 post-factum you shouldn't fill it to the state exceeding available pool size
12:00 post-factum but that is how overprovisioning works
12:02 bkunal post-factum, you can create it, but you will not be able to use 10 GB of space... for the last 2 GB you will get a block allocation failure
12:02 post-factum sure
12:02 post-factum unless you use fs that supports compression
12:02 bkunal post-factum, right
12:05 mhulsman1 joined #gluster
12:10 Smoka joined #gluster
12:10 [fre] post-factum, bkunal, ndevos https://paste.ubuntu.com/23093031/
12:10 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
12:11 post-factum and...?
12:11 [fre] not working. :)
12:12 post-factum ???
12:12 bkunal [fre], what is not working
12:12 [fre] fstrim. ;)
12:13 post-factum it seems it works properly
12:13 post-factum you have 0.25% occupied
12:14 post-factum 0.233/97*100==0.25
12:14 [fre] post-factum, exactly. but... fstrim is not the solution to get my partition smaller. :)
12:14 post-factum [fre]: you do not need your partition to be smaller!
12:15 post-factum [fre]: feel free to create another lv and use it right now
12:15 post-factum [fre]: it is *thin* pool
12:16 [fre] (hush, I get that it's thin. ;)) Give me a minute to try. I want to see it reflected.
12:16 [fre] I do get a warning, but it's made....
12:16 post-factum ye, you get a warning
12:17 post-factum so just do not let sum exceed total space
12:17 post-factum and enable discard
12:17 post-factum or do regular fstrim
12:18 [fre] lvs will never give me the right size in that case.
12:18 [fre] does it?
12:21 post-factum it gives it in %
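
    So the approach being suggested is: leave the existing LV and its XFS alone, carve a second thin volume out of the same pool, and monitor the pool's Data% rather than the LV sizes. A sketch with invented names (vg0, thinpool and the 20T size are placeholders), matching the lvcreate -V -T form mentioned earlier:

        lvcreate -V 20T -T vg0/thinpool -n smallfiles
        mkfs.xfs /dev/vg0/smallfiles
        lvs vg0    # thinpool's Data% is the number to watch

    LVM will warn that the pool is overcommitted, as seen above at 12:16; with thin provisioning that is expected, as long as the combined real usage of all LVs stays under the pool size.
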
12:23 itisravi joined #gluster
12:27 johnmilton joined #gluster
12:29 johnmilton joined #gluster
12:31 hackman joined #gluster
12:41 [fre] post-factum, to conclude: with XFS, lvreduce is too risky; fstrim and monitor the usage when creating new LVs. With ext4: fstrim, lvreduce -r and off we go. Right?
12:41 post-factum [fre]: no, you do not need lvreduce on thin pool
12:42 [fre] you'll never see LVM reflecting the correct lv-size.
12:42 DV_ joined #gluster
12:42 post-factum [fre]: you will if you are using TRIM properly
12:43 [fre] okay, tell me. until yesterday evening I've never used it. :)
12:43 [fre] And looking for the gory details until I see it ;)
12:47 [fre] no, honestly... I see no reason why you shouldn't be able to tell LVM that your thin-volume now should not exceed 40TB instead of 80TB.
12:47 sandersr joined #gluster
12:48 [fre] https://wiki.gentoo.org/wiki/LVM#Reducing_a_thin_logical_volume
12:48 glusterbot Title: LVM - Gentoo Wiki (at wiki.gentoo.org)
12:53 shubhendu joined #gluster
12:54 unclemarc joined #gluster
13:01 hackman joined #gluster
13:04 arcolife joined #gluster
13:10 jiffin1 joined #gluster
13:13 kramdoss_ joined #gluster
13:26 skylar joined #gluster
13:31 jobewan joined #gluster
13:42 kotreshhr left #gluster
13:43 nbalacha joined #gluster
13:45 Manikandan_ joined #gluster
13:47 plarsen joined #gluster
13:50 ndevos [fre]: the whole point of using thin-provisioned LVs is that the LVs do not have their blocks allocated during lvcreate, but rather dynamically
13:51 ndevos [fre]: you just 'fake' a huge LV to the user (the filesystem, XFS), and only allocate the blocks that the user actually needs
13:51 post-factum ndevos: i believe i just had a successful pm chat about this with him
13:52 ndevos post-factum: ah, good, just unfortunate that other readers in this channel missed it :)
13:56 msvbhat joined #gluster
13:56 aravindavk joined #gluster
14:05 Sebbo1 joined #gluster
14:16 shyam joined #gluster
14:17 [fre] ndevos, from my previous storage-background, I do know the usage of thin-provisioning...
14:18 [fre] I just got stuck in a mindloop around XFS..
14:18 Manikandan_ joined #gluster
14:24 harish__ joined #gluster
14:24 post-factum ча
14:24 post-factum oops
14:24 post-factum i mean, xfs @ thin pool in vm recently surprised me
14:25 post-factum it failed in the middle of system upgrade, and then after reboot i got zeroed files
14:25 post-factum i thought this was fixed a long time ago
14:30 [fre] Guys, thank you for the help. It's 4h30 CET, time to start the weekend!
14:33 kpease joined #gluster
14:38 kpease joined #gluster
14:38 jiffin joined #gluster
14:40 [diablo] afternoon guys... anyone using the Red Hat Gluster Storage Console please?
14:41 [diablo] Is it me, or is it impossible to create a new volume (and define a new brick for it) without it using a block device?
14:42 [diablo] I can certainly create a volume via the gluster command line tool, but the RHGSC won't play ball
14:50 mrEriksson joined #gluster
15:02 xMopxShell joined #gluster
15:04 yosafbridge joined #gluster
15:05 wushudoin joined #gluster
15:20 David_Varghese joined #gluster
15:23 shyam joined #gluster
15:27 dnunez joined #gluster
15:28 Gambit15 post-factum, JoeJulian, out of curiosity, it'd probably be in my interest to move from Dist+Rep to Dispersed in the future, when I've got more peers & a better storage network. Is it possible to migrate to that, or would I have to create new volumes & copy the data across?
15:48 dnunez joined #gluster
15:49 post-factum Gambit15: i believe in-place migration is not possible, you will have to re-create volumes
15:50 Gambit15 Cool, all I needed to know. Cheers
15:50 Gambit15 Am I right in understanding the only "migration" possible is from distributed to distributed-replicated?
15:54 Gambit15 BTW, one other concern I've had, is that I'm building all of my bricks on single LVM volumes. That means that everything sits on the same filesystem (xfs). You reckon that could cause a problem, or be a weakness?
15:55 Gambit15 I know everything's rep'd, but it means that everything is at the mercy of a single journal... Perhaps that's a bit overly paranoid though
15:56 hchiramm joined #gluster
15:57 msvbhat joined #gluster
16:03 hackman joined #gluster
16:04 msvbhat joined #gluster
16:06 David-Varghese joined #gluster
16:11 post-factum "migration" possible is from distributed to distributed-replicated, yup
16:11 post-factum shared xfs is not a problem
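
    For the distribute-to-distribute-replicate case, the usual mechanism is add-brick with a new replica count; a hedged sketch with invented hosts and paths, not a recipe to run blindly:

        # turn a 2-brick distribute volume into a 2x2 distribute-replicate one
        gluster volume add-brick myvol replica 2 s2:/bricks/b1 s3:/bricks/b2
        gluster volume heal myvol full    # kick off population of the new replica bricks
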
16:23 Gambit15 joined #gluster
16:28 harish joined #gluster
16:40 ldumont joined #gluster
16:42 jobewan joined #gluster
16:44 Manikandan_ joined #gluster
16:46 armyriad joined #gluster
16:47 ldumont Hey guys, I have a brand new glusterfs cluster on debian 8.5
16:47 ldumont I have an issue where one of the host is unable to mount to volume using the fstab method after booting.
16:47 rafi1 joined #gluster
16:47 ldumont I need to manually use mount -a in order to finally mount to volume.
16:51 Pupeno joined #gluster
16:58 ben453 joined #gluster
17:05 Gambit15 ldumont, the network interface used by gluster is coming up late perhaps?
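
    A common workaround when the network or glusterd is not ready at mount time is to mark the entry as a network mount and/or let systemd mount it on first access; a sketch with made-up server, volume and mountpoint names:

        # /etc/fstab
        server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,noauto,x-systemd.automount  0 0

    The backup-volfile-servers mount option may also help if the server named in fstab is the node that is still booting.
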
17:06 Gambit15 Um, could someone decipher this for me please?
17:06 Gambit15 [root@v0 ~]# gluster volume create iso replica 3 arbiter 1 s0.dc0:/gluster/iso/brick s1.dc0:/gluster/iso/brick s2.dc0:/gluster/iso/arbiter s2.dc0:/gluster/iso/brick s3.dc0:/gluster/iso/brick s0.dc0:/gluster/iso/arbiter
17:06 Gambit15 volume create: iso: failed: The brick s0.dc0.:/gluster/iso/brick is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
17:07 Gambit15 I don't understand what it's trying to warn me about?
17:10 Pupeno joined #gluster
17:10 Gambit15 ...because I'm using the full path, which starts at root?
17:10 Manikandan_ joined #gluster
17:14 post-factum because you use a folder that is not under a mountpoint
17:15 ivan_rossi left #gluster
17:19 Gambit15 Aff...bingo. Forgot to update fstab when I reinstalled the host
17:19 squizzi joined #gluster
17:20 jri joined #gluster
17:23 JoeJulian ... and that's why it warns you. :)
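
    For reference, the warning disappears once the brick directory sits on its own mounted filesystem rather than on /; a sketch for one node, with illustrative device and paths:

        mkfs.xfs -i size=512 /dev/vg0/iso
        echo '/dev/vg0/iso  /gluster/iso  xfs  defaults  0 0' >> /etc/fstab
        mount /gluster/iso
        mkdir -p /gluster/iso/brick
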
17:31 ashiq joined #gluster
17:51 gnulnx_ hey post-factum, any new findings?
17:52 post-factum i'm stuck with valgrind profiling
17:52 post-factum cannot get massif tool to work properly with gluster
17:52 post-factum and need Pranith assistance, but he is not here atm
17:54 post-factum or whoever has worked with valgrind massif
17:59 jiffin joined #gluster
18:01 karnan joined #gluster
18:05 Pupeno_ joined #gluster
18:18 pampan joined #gluster
18:21 Pupeno joined #gluster
18:35 baojg joined #gluster
18:36 snehring joined #gluster
18:53 baojg joined #gluster
18:58 Pupeno joined #gluster
19:00 baojg joined #gluster
19:01 baojg joined #gluster
19:03 kovshenin joined #gluster
19:13 johnmilton joined #gluster
19:15 baojg joined #gluster
19:19 metsuke joined #gluster
19:25 metsuke hey all, I'm looking at different DFS's for persistent storage (~250TB) and attaching volumes to thousands of docker containers.  Is gluster suited for such things?  I'm also looking at ceph
19:34 johnmilton joined #gluster
19:39 hackman joined #gluster
19:47 post-factum metsuke: workload?
19:54 metsuke post-factum: maybe 100 cpu cores constantly in use?
19:54 post-factum metsuke: i mean, i/o workload
20:04 Pupeno joined #gluster
20:05 metsuke post-factum: our max is 80Gbps due to network, but I doubt we'd even use half
20:06 metsuke we have ssd's for everything
20:09 post-factum metsuke: i mean, what are your real tasks for the DFS?
20:18 metsuke post-factum: web servers, databases, jenkins, development on java
20:21 metsuke each container or host will receive a volume on which to back up dbs, and use an s3 api to transfer objects and files
20:21 post-factum databases on gluster? i'd say, no way
20:22 post-factum backups are ok
20:22 post-factum everything else too
20:22 metsuke only backups, databases will be on the compute hosts
20:22 post-factum ok then
20:23 metsuke I'm just finding it hard to determine which DFS is really suited for which purposes
20:23 metsuke ceph seems to do similar things as well
20:24 post-factum yup
20:24 post-factum stability is the question now, at least, for me
21:31 JoeJulian I look at stability, recoverability, and worst-case, primarily, but whether or not I can train ops staff is a close second.
22:09 wadeholler joined #gluster
22:26 kovshenin joined #gluster
23:01 metsuke JoeJulian: and I'm assuming gluster works great for you then?
23:04 JoeJulian Yep, that's why I've been hanging out here helping people for over 7 years.
23:06 JoeJulian I had a ceph cluster which failed miserably. I can't blame ceph, I'm sure sage will be happy to hear, it was completely a hardware problem ( LSI SAS expanders in Wywinn OCP storage trays ) but what I was left with was random data scattered over god-knows-which drives - leaving it all unrecoverable. At least with gluster I can go to the disk and be very confident I can pull the customer data from it.
23:14 Gambit15 joined #gluster
23:18 bluenemo joined #gluster
