
IRC log for #gluster, 2017-01-25


All times shown according to UTC.

Time Nick Message
00:03 tdasilva joined #gluster
00:03 bbooth joined #gluster
00:16 B21956 joined #gluster
00:31 bbooth joined #gluster
00:43 alvinstarr1 joined #gluster
00:47 Shu6h3ndu joined #gluster
00:48 TBlaar2 joined #gluster
00:49 Shu6h3ndu joined #gluster
01:13 plarsen joined #gluster
01:33 phileas joined #gluster
01:44 jdossey joined #gluster
01:51 arpu joined #gluster
02:17 derjohn_mobi joined #gluster
02:28 farhorizon joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:10 gyadav joined #gluster
03:11 haomaiwang joined #gluster
03:13 haomaiwang joined #gluster
03:13 ashiq joined #gluster
03:16 ppai joined #gluster
03:20 farhorizon joined #gluster
03:33 magrawal joined #gluster
03:46 skumar joined #gluster
03:46 riyas joined #gluster
03:53 atinm joined #gluster
03:56 atinmu joined #gluster
04:13 haomaiwang joined #gluster
04:15 jdarcy joined #gluster
04:16 jdarcy joined #gluster
04:21 poornima joined #gluster
04:27 buvanesh_kumar joined #gluster
04:27 panina joined #gluster
04:33 k4n0 joined #gluster
04:38 Saravanakmr joined #gluster
04:39 Prasad_ joined #gluster
04:41 ppai joined #gluster
04:50 jiffin joined #gluster
04:55 atmosphe joined #gluster
04:56 rafi joined #gluster
04:57 victori joined #gluster
05:02 RameshN joined #gluster
05:05 jiffin1 joined #gluster
05:10 farhorizon joined #gluster
05:11 k4n0 joined #gluster
05:12 farhorizon joined #gluster
05:13 haomaiwang joined #gluster
05:15 karthik_us joined #gluster
05:20 msvbhat joined #gluster
05:22 jiffin joined #gluster
05:23 ndarshan joined #gluster
05:30 apandey joined #gluster
05:38 sbulage joined #gluster
05:41 prasanth joined #gluster
05:44 riyas joined #gluster
05:45 ksandha_ joined #gluster
05:53 RameshN joined #gluster
05:54 ankit_ joined #gluster
05:56 ankit__ joined #gluster
06:05 kdhananjay joined #gluster
06:07 victori joined #gluster
06:13 haomaiwang joined #gluster
06:14 victori joined #gluster
06:15 hgowtham joined #gluster
06:17 susant joined #gluster
06:18 kkeithley joined #gluster
06:19 ndevos joined #gluster
06:22 Philambdo joined #gluster
06:32 itisravi joined #gluster
06:36 rastar joined #gluster
06:36 susant joined #gluster
06:40 msvbhat joined #gluster
06:40 Saravanakmr joined #gluster
06:42 [diablo] joined #gluster
06:43 sanoj joined #gluster
06:45 karthik_us joined #gluster
06:46 farhorizon joined #gluster
06:54 kotreshhr joined #gluster
06:59 rafi joined #gluster
07:02 Debloper joined #gluster
07:03 rjoseph joined #gluster
07:03 kdhananjay joined #gluster
07:03 skumar joined #gluster
07:09 apandey joined #gluster
07:10 MikeLupe joined #gluster
07:12 panina joined #gluster
07:13 haomaiwang joined #gluster
07:26 jtux joined #gluster
07:34 BlackoutWNCT1 joined #gluster
07:38 shortdudey123 joined #gluster
07:39 rideh joined #gluster
07:44 mb_ joined #gluster
07:48 joshin joined #gluster
07:50 victori joined #gluster
07:52 ivan_rossi joined #gluster
07:59 joshin joined #gluster
08:05 TFJensen Hi guys, still struggling with the 3-node setup not replicating. Getting this error on one of the hosts (nodes): "[xxxxx.xxxxx] Buffer I/O error on dev dm-6, logical block xxxxxxxx, lost async page write"
08:08 Saravanakmr TFJensen, this seems like a kernel error
08:09 TFJensen I just checked again, node 1 and 2 are getting this error
08:09 Saravanakmr TFJensen, you need to check the corresponding node's disk
08:09 TFJensen node 3 is just an arbiter
08:10 TFJensen I checked with mdadm and no errors
08:10 joshin joined #gluster
08:10 Guest89004 joined #gluster
08:11 Saravanakmr can you check the backend using fsck? As I said, this error comes from the kernel and is not related to GlusterFS
08:12 Saravanakmr you may want to check free memory, corresponding disk size..
08:12 TFJensen a lot of memory free
08:12 shutupsquare joined #gluster
08:12 TFJensen its a raid10 with 4 disks
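(A minimal sketch of how one might chase a "Buffer I/O error ... lost async page write" down to the hardware, assuming hypothetical device, volume-group, and brick names; the real dm-N device comes from the error message itself:)

    dmesg | grep -i 'dm-6'                       # confirm which device-mapper device is throwing the errors
    lsblk -o NAME,KNAME,SIZE,TYPE,MOUNTPOINT     # map dm-6 back to its LV / RAID array / member disks
    cat /proc/mdstat                             # an mdadm array can report "clean" while a member is failing slowly
    smartctl -a /dev/sda                         # SMART health of each member disk (smartmontools)
    umount /bricks/brick1                        # take the brick offline before any filesystem check
    fsck -n /dev/mapper/vg0-brick1               # read-only check; for XFS bricks use xfs_repair -n instead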
08:13 shutupsquare joined #gluster
08:13 haomaiwang joined #gluster
08:13 jri joined #gluster
08:20 TvL2386 joined #gluster
08:24 bluenemo joined #gluster
08:24 Saravanakmr joined #gluster
08:24 joshin joined #gluster
08:28 Philambdo joined #gluster
08:29 mbukatov joined #gluster
08:30 fsimonce joined #gluster
08:30 riyas_ joined #gluster
08:32 RameshN joined #gluster
08:32 farhorizon joined #gluster
08:37 joshin joined #gluster
08:38 mhulsman joined #gluster
08:41 musa22 joined #gluster
08:41 mhulsman1 joined #gluster
08:42 sbulage joined #gluster
08:47 victori joined #gluster
08:48 alezzandro joined #gluster
08:48 rafi joined #gluster
08:50 RameshN joined #gluster
08:50 mbukatov joined #gluster
08:54 Humble joined #gluster
08:57 armin joined #gluster
09:01 joshin joined #gluster
09:03 flying joined #gluster
09:07 loadtheacc joined #gluster
09:08 susant left #gluster
09:11 prasanth joined #gluster
09:13 haomaiwang joined #gluster
09:13 joshin joined #gluster
09:17 alezzandro joined #gluster
09:19 pulli joined #gluster
09:23 pulli joined #gluster
09:23 joshin joined #gluster
09:26 shutupsquare joined #gluster
09:30 jwd joined #gluster
09:32 percevalbot joined #gluster
09:32 owlbot joined #gluster
09:33 Seth_Karlo joined #gluster
09:34 pulli joined #gluster
09:35 pulli joined #gluster
09:35 k4n0 joined #gluster
09:39 rafi joined #gluster
09:39 susant joined #gluster
09:40 marbu joined #gluster
09:44 musa22 joined #gluster
09:47 mbukatov joined #gluster
09:48 derjohn_mobi joined #gluster
09:48 victori joined #gluster
09:50 karthik_us joined #gluster
09:54 joshin joined #gluster
09:57 kotreshhr joined #gluster
10:02 bbooth joined #gluster
10:11 RameshN joined #gluster
10:13 haomaiwang joined #gluster
10:16 pulli joined #gluster
10:28 Gambit15 joined #gluster
10:30 susant joined #gluster
10:39 mhulsman joined #gluster
10:41 msvbhat joined #gluster
10:44 mbrumbelow joined #gluster
10:46 rafi joined #gluster
10:47 RameshN joined #gluster
10:58 karthik_us joined #gluster
10:59 jtux joined #gluster
11:00 mhulsman1 joined #gluster
11:03 mbrumbelow joined #gluster
11:08 ira joined #gluster
11:13 haomaiwang joined #gluster
11:13 k4n0 joined #gluster
11:15 k4n0 joined #gluster
11:18 k4n0 joined #gluster
11:30 dfs_victim joined #gluster
11:38 TvL2386 joined #gluster
11:58 TvL2386 joined #gluster
12:08 k4n0 joined #gluster
12:10 mhulsman joined #gluster
12:12 mbukatov joined #gluster
12:12 mhulsman1 joined #gluster
12:13 rwheeler joined #gluster
12:13 haomaiwang joined #gluster
12:19 mhulsman joined #gluster
12:28 kettlewell joined #gluster
12:31 BitByteNybble110 joined #gluster
12:33 nthomas_ joined #gluster
12:34 fang64 joined #gluster
12:39 ankit_ joined #gluster
12:42 mhulsman1 joined #gluster
12:46 alezzandro joined #gluster
12:48 victori joined #gluster
12:53 kotreshhr left #gluster
12:56 musa22 joined #gluster
12:58 unclemarc joined #gluster
13:01 ahino joined #gluster
13:05 mhulsman joined #gluster
13:07 joshin joined #gluster
13:07 joshin joined #gluster
13:07 B21956 joined #gluster
13:08 sbulage joined #gluster
13:13 haomaiwang joined #gluster
13:26 kdhananjay joined #gluster
13:38 ahino joined #gluster
13:42 Philambdo joined #gluster
13:46 Wizek_ joined #gluster
13:47 buvanesh_kumar joined #gluster
13:49 victori joined #gluster
13:55 jwd joined #gluster
14:10 RameshN joined #gluster
14:11 nh2_ joined #gluster
14:12 Philambdo joined #gluster
14:13 haomaiwang joined #gluster
14:26 skylar joined #gluster
14:31 bbooth joined #gluster
14:36 mhulsman joined #gluster
14:42 victori joined #gluster
14:42 farhorizon joined #gluster
14:44 bbooth joined #gluster
14:44 bowhunter joined #gluster
14:52 hchiramm_ joined #gluster
14:59 marbu joined #gluster
15:10 shyam joined #gluster
15:12 kpease joined #gluster
15:12 vbellur joined #gluster
15:13 kpease_ joined #gluster
15:16 jdossey joined #gluster
15:20 bbooth joined #gluster
15:22 mhulsman joined #gluster
15:25 bbooth joined #gluster
15:25 Gambit15 joined #gluster
15:29 wushudoin joined #gluster
15:31 shyam joined #gluster
15:31 rwheeler joined #gluster
15:34 prasanth joined #gluster
15:36 marbu joined #gluster
15:39 shutupsquare joined #gluster
15:49 victori joined #gluster
16:03 ira joined #gluster
16:03 marbu joined #gluster
16:23 vbellur joined #gluster
16:23 victori joined #gluster
16:27 nirokato joined #gluster
16:29 bbooth joined #gluster
16:30 musa22 joined #gluster
16:32 PatNarciso Anyone in the room use their gluster for video editing?   I'd appreciate a chat regarding your setup... and how you satisfy Mac clients :\
16:33 shyam joined #gluster
16:47 marbu joined #gluster
16:51 JoeJulian PatNarciso: I know a lot of people that do, but they have tools they developed in-house that run on Linux. No mac needs.
16:52 victori joined #gluster
16:55 JoeJulian I work across the hall from these guys now: https://goo.gl/photos/3j2iN41yBqrE7D6TA
16:55 susant joined #gluster
16:55 mb_ joined #gluster
16:58 jwaibel joined #gluster
16:58 jdossey joined #gluster
17:17 nirokato joined #gluster
17:17 social joined #gluster
17:19 bbooth joined #gluster
17:20 nirokato joined #gluster
17:23 nirokato joined #gluster
17:27 social joined #gluster
17:32 alezzandro joined #gluster
17:34 farhorizon joined #gluster
17:38 bowhunter joined #gluster
17:42 jediburaniju joined #gluster
17:44 farhorizon joined #gluster
17:45 marbu joined #gluster
17:47 riyas joined #gluster
17:48 bbooth joined #gluster
17:54 msvbhat joined #gluster
17:54 bbooth joined #gluster
17:55 sanoj joined #gluster
17:57 Jacob843 joined #gluster
18:01 ahino joined #gluster
18:03 bbooth joined #gluster
18:05 ivan_rossi left #gluster
18:15 gyadav joined #gluster
18:15 JoeJulian TFJensen: "lost async page write" is, indeed, a kernel error. https://github.com/torvalds/linux/blob/62f8c40592172a9c3bc2658e63e6e76ba00b3b45/fs/buffer.c#L354
18:15 glusterbot Title: linux/buffer.c at 62f8c40592172a9c3bc2658e63e6e76ba00b3b45 · torvalds/linux · GitHub (at github.com)
18:17 PatNarciso JoeJulian - good neighborhood to be in.  My editors are mac + adobe (premiere).  I really should document my full setup... at this point, I think I want to optimize our samba conf on glusterfuse.  I'm also fed up with Mac*... everything from ._ resource files, to 1gb ethernet limitations unless I get a ~$500 10g thunderbolt adapter...
18:17 JoeJulian I got fed up with Mac in 1989.
18:19 PatNarciso I'm open minded to crazy ideas at this point - including running a VM on the mac, with a linux/gluster client, bumping up the client cache and exporting their mount smb... it's silly.
18:19 JoeJulian I saw somebody exporting a gluster volume to appletalk once a long time ago.
18:20 PatNarciso appletalk = afp, right?
18:20 JoeJulian But if I had to do it, I'd probably just use ganesha nfs.
18:20 JoeJulian maybe, it's been a long time and I really didn't care. ;)
18:21 PatNarciso heh
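(For the ._ resource-file litter and general macOS interop PatNarciso mentions above, Samba's vfs_fruit module is the usual lever. A minimal smb.conf sketch; share names and paths are hypothetical, not from the log:)

    [global]
        # vfs_fruit implements Apple's SMB extensions; with streams_xattr the
        # resource fork / Finder metadata lives in xattrs instead of ._ sidecar files
        vfs objects = catia fruit streams_xattr
        fruit:metadata = stream
        fruit:resource = stream

    [video]
        path = /mnt/glustervol/video
        # optionally refuse to store Finder litter at all
        veto files = /._*/.DS_Store/
        delete veto files = yes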
18:21 vbellur joined #gluster
18:21 JoeJulian It might have been post-perlgeek if you want to search for it in the channel logs.
18:22 ahino joined #gluster
18:22 shyam joined #gluster
18:22 JoeJulian Not sure if botbot goes back far enough.
18:25 * PatNarciso adds this to the things to research later today
18:26 JoeJulian I avoid SMB. The locks protocols always seem to get f'ed up and slow everything to a crawl. I don't know if that's samba's fault or the protocol. Had good luck with nfs though, even on the occasional Mac.
18:27 snehring had pretty good luck recently with smb3.0
08:28 PatNarciso snehring, samba is up to 4.x now, right?  any reason why sticking with 3.0?  (I also may be confusing the samba protocol version with the samba software version)
18:28 snehring sorry, not the samba version the smb protocol version
18:29 snehring iirc the cifs client (on linux) defaults to 1.0 which sucks
18:29 snehring not sure about the behavior on the mac side
18:29 JoeJulian Well CIFS is 1.0. ;)
18:29 snehring true
18:30 snehring I _think_ 3.0 also adds transport encryption
18:31 PatNarciso wow -- did that cause a bunch of overhead.
18:31 snehring have yet to get a real good test with the system under load
18:31 snehring but we've been getting surprisingly good performance
18:32 bbooth joined #gluster
18:32 PatNarciso When I found the gem to disable mac-client smb encryption... my video editors got a large jump in transcoding performance.
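(A sketch of pinning the protocol dialect and making encryption explicit, assuming the server is Samba and the clients can negotiate SMB3; the option names are standard smb.conf / mount.cifs ones, the values here are illustrative:)

    # smb.conf (server side)
    [global]
        server min protocol = SMB3     # refuse SMB1/CIFS clients
        smb encrypt = desired          # or "off" if the CPU cost hurts transcoding throughput

    # Linux client side: force a modern dialect instead of the old default
    mount -t cifs //fileserver/video /mnt/video -o vers=3.0,username=editor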
18:33 snehring was expecting smb performance to be really poor but with a 10G connected client we saw ~700MB/s 4k sequential reads and...
18:34 snehring less impressive writes
18:34 snehring like 8MB/s
18:34 snehring (which outperforms our existing emc setup)
18:34 snehring 4k writes^
18:35 PatNarciso ... my raid6 will push 700MB/s locally with light load.  :\
18:36 snehring yeah local's always going to be better
18:36 snehring esp with respect to latency
18:36 JoeJulian Apples vs Orchards
18:36 snehring this volume's also a distributed-disperse jbod affair
18:37 snehring 216 bricks
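(For numbers like the 700MB/s 4k reads vs 8MB/s 4k writes quoted above, fio is a convenient way to reproduce the measurement against any mount. A hedged sketch; paths and sizes are made up:)

    fio --name=seqread --directory=/mnt/video --rw=read --bs=4k --size=4G \
        --ioengine=libaio --direct=1 --runtime=60 --group_reporting
    # repeat with --rw=write for the write side; drop --direct=1 if the
    # SMB/FUSE mount refuses O_DIRECT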
18:38 PatNarciso are you running the smb server on a gluster server node?
18:38 snehring all of them actually with ctdb
18:39 snehring with a dns round robin in front of it
18:39 PatNarciso digg it.
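(The CTDB-on-every-node layout snehring describes usually comes down to two small files, identical on each Samba/Gluster node. A sketch with hypothetical addresses:)

    # /etc/ctdb/nodes -- private (cluster) addresses of all nodes
    10.0.0.1
    10.0.0.2
    10.0.0.3

    # /etc/ctdb/public_addresses -- floating client-facing IPs that CTDB moves between
    # nodes; these are the addresses published in the DNS round robin
    192.168.1.101/24 eth0
    192.168.1.102/24 eth0
    192.168.1.103/24 eth0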
18:39 shutupsquare joined #gluster
18:40 snehring actually kind of kicking myself for not making the cluster private network infiniband since I probably could have gotten away with it from a funding point of view
18:41 PatNarciso my funding is... (looks in the couch)
18:41 snehring same generally, was able to get central buy in on this project (university)
18:41 PatNarciso our smb server is on a VM, over a gluster fuse mount.  and that may be part of the reason why our SMB rates are so low.
18:42 snehring maybe
18:45 PatNarciso snehring, what os are your nodes on?
18:45 PatNarciso We're ubuntu -- but some days I wonder if cent/rh would be more ideal for gluster.
18:50 ahino joined #gluster
18:50 PatNarciso in regards to the little things: performance profiles/settings; at one point I noticed the mkfs.xfs defaults were different between cent and ubuntu (which was surprising to me).
18:52 mhulsman joined #gluster
18:54 MikeLupe JoeJulian: ping
18:54 glusterbot MikeLupe: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
18:55 victori joined #gluster
18:55 snehring PatNarciso: RHEL7
18:57 snehring I don't really know if you'd see a huge difference between distros, I (and my workplace) just have a preference toward rhel and derivatives
18:57 MikeLupe JoeJulian: that "naked ping" article is crazy
18:58 JoeJulian Hehe, but true.
18:58 MikeLupe ;)
18:58 jwd joined #gluster
18:59 JoeJulian Nothing worse than coming back from lunch to see a ping with no context and have the person who pinged be gone.
18:59 MikeLupe I'm here
18:59 MikeLupe ;)
18:59 JoeJulian I
18:59 MikeLupe Maybe you can hint me again
18:59 JoeJulian I'm not sure if I'm here.
19:00 MikeLupe True, you haven't ponged
19:00 MikeLupe "gluster volume heal engine info healed"
19:00 MikeLupe gave: "Gathering list of healed entries on volume engine has been unsuccessful on bricks that are down. Please check if all brick processes are running."
19:00 ic0n_ joined #gluster
19:00 MikeLupe Is that normal on working, functional volumes?
19:01 JoeJulian That means that some glustershd didn't respond.
19:01 JoeJulian It seems to have become more common with recent versions.
19:01 MikeLupe argh - volume status, peer status etc is ok, but something's still wrong?
19:02 farhorizon joined #gluster
19:02 jdossey joined #gluster
19:02 MikeLupe "gluster volume heal engine info split-brain" - no entries
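(A quick checklist for the situation MikeLupe describes, where "info healed" complains about bricks being down while everything else looks fine; the volume name "engine" is from the log, the rest is generic gluster CLI:)

    gluster volume status engine                        # every brick and every Self-heal Daemon should show Online "Y"
    gluster volume heal engine info                     # entries currently pending heal; empty output means nothing pending
    gluster volume heal engine statistics heal-count    # per-brick pending-heal counts
    ps aux | grep glustershd                            # confirm the self-heal daemon process is running on each node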
19:07 mb_ joined #gluster
19:07 PatNarciso What are your fellas' first reactions to a single xfs brick, on lvm, on a raid6 of 24x8TB?  Too large?  No big deal?  Make it bigger?  Any horror stories?
19:08 mlhess- joined #gluster
19:09 JoeJulian If your raid breaks, it takes a week to get healthy again.
19:09 JoeJulian Personally, I would only use raid for performance purposes and I use replication for fault tolerance.
19:11 PatNarciso replica 2? (I gotta read more into how to quorum with just 2)
19:11 JoeJulian I always do replica 3, but 2+arbiter is better than 2.
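(A sketch of the replica 3 arbiter 1 layout JoeJulian prefers over plain replica 2; hostnames and brick paths are hypothetical. The arbiter brick stores only metadata, so it can live on a much smaller disk:)

    gluster volume create bigvol replica 3 arbiter 1 \
        server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/arbiter1
    gluster volume start bigvol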
19:12 PatNarciso and, in your opinion, how many drives (spinning) is too many for one node?
19:17 * PatNarciso has plenty of thoughts on this... I'm attracted to 45-drive-like solutions, but become concerned -- as the probability for something to break becomes higher.
19:17 JoeJulian That's a tough question. It depends on how many total servers you have. My preference is to have no more than 10% of my storage be on any one replica subvolume. There are several different math models that can be applied to that question, some of them showing the likelihood of losing any one file is so statistically insignificant as to be 0.
19:18 JoeJulian Don't forget, the more dense storage is actually best suited to cold storage. Too many disks seeking in tight configurations can cause vibrations that cause head misses.
19:19 JoeJulian https://www.youtube.com/watch?v=tDacjrSCeq4
19:20 * PatNarciso is watching.
19:20 snehring is this the sun one?
19:20 snehring yeah!
19:20 PatNarciso no shit...
19:22 JoeJulian So the facebooks of the world can do super dense storage because they'll actually spin-down disks until they need to preload a memcache somewhere.
19:22 JoeJulian Then, everything is stored in big huge sequential files which keeps seeking to a minimum.
19:24 PatNarciso so datacenters near train tracks... not ideal.  k.
19:26 snehring lol
19:26 snehring ours is :D
19:28 MikeLupe JoeJulian: I know my setup's tiny and "cute", but you got some more hints?
19:29 PatNarciso XFS vs ZFS.  XFS has never failed me.  ZFS has some attractive features, but I get this little feeling that... if/when ZFS does f'up: its gonna be in a big way.     I continue to favor XFS... but really need to find a way to reduce spinning disk Raid6 IO.
19:29 JoeJulian The setup that I was using while I became the "expert" I am today was three servers with 4 disks each, replica 3, holding about 300GB (yes, GB) of data.
19:29 MikeLupe Ok ok , I'll let the adults talk
19:30 JoeJulian So like the saying goes, it's not the size it's how you use it.
19:30 MikeLupe oky :)
19:30 JoeJulian brb.
19:30 MikeLupe hehe
19:32 bbooth joined #gluster
19:32 snehring PatNarciso, I have a small gluster setup running on zfs
19:33 snehring in an hpc context
19:34 snehring I have yet to have zfs explode in a major way
19:34 snehring not saying it's infallible though
19:34 PatNarciso snehring, whats your setup like?  and how long have you been working with ZFS?
19:35 snehring I've been using zfs (on linux) in a few ways for about 5 years now
19:35 snehring this specific setup is a replica 2 (have two storage nodes for this cluster) with a zpool on each node
19:35 snehring zpool config is...
19:35 skylar I've never used gluster on ZFS, but for a while we had close to 1PB on ZFS, mainly as a NFS backend
19:36 skylar we're moving off of it, but just so we can retire solaris
19:36 PatNarciso skylar, whoah.  any ZFS horror stories?
19:36 ahino joined #gluster
19:36 snehring span of raidz1's looks like
19:36 skylar PatNarciso - nope, only data-loss problems we had were from faulty firmware/hardware
19:37 skylar and ZFS itself behaved far better in the face of hardware problems than other filesystems I've used
19:37 snehring I got burned with a personal nas that was running xfs on top of a mdadm raid5 with silent corruption from a bad disk
19:38 PatNarciso skylar, how many drives were needed to make 1PB on a single ZFS?
19:38 shyam joined #gluster
19:38 skylar at the time we had mostly 1TB drives, so at least 1000
19:38 PatNarciso snehring, noooo... I dont wana hear that.   How did you learn of the silent corruption?
19:39 jdossey joined #gluster
19:39 snehring when music files started having inexplicable pops and chirps in em
19:39 skylar our workload is very write-heavy, so we're standardizing on GPFS right now
19:39 msvbhat joined #gluster
19:40 PatNarciso snehring, so -- the normal md checks didn't find this?
19:40 vbellur joined #gluster
19:40 snehring nope
19:41 snehring smart didn't even really seem to show anything meaningful
19:41 skylar ZFS would catch that, assuming the drive isn't also lying about whether it's actually committing data to platter
19:41 snehring it wasn't until I started trying to diagnose a performance problem (by offlining the disks one at a time and comparing) that I noticed one disk being very bad
19:41 snehring yeah zfs would
19:41 snehring 's why I adopted it
19:41 snehring also inline compression is really nice
19:42 snehring specifically with lz4
19:42 skylar yup, we loved it too
19:43 skylar I did hear horror stories about ZFS dedupe, but the compression seemed solid
19:43 snehring yeah dedup can lead to some issues
19:43 snehring we've got an all flash zfs storage thing that some of our VMs run on and that's worked okay
19:44 snehring kinda avoid the performance penalty with dedup by cheating with flash
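(A minimal sketch of the lz4-compression setup snehring and skylar are endorsing, with dedup deliberately left off; pool and device names are hypothetical:)

    zpool create tank raidz1 sdb sdc sdd sde
    zfs set compression=lz4 tank        # cheap enough to leave on for everything
    zfs get compressratio tank          # see what it is actually saving
    zfs set dedup=off tank              # dedup needs a large in-RAM dedup table; the usual source of horror stories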
19:47 * skylar is jealous - we're just starting to deploy flash, and just as a metadata tier
19:48 snehring I wouldn't be too jealous, it's a 2U supermicro box jammed full of samsung consumer sata ssds
19:48 snehring enterprise it is not
19:48 snehring relatively cheap though
19:49 PatNarciso if I were to setup my first metadata tier, it would be the same...
19:49 skylar our researchers want capacity over performance when they buy stuff, and then complain later when the performance is lacking
19:50 snehring sounds familiar
19:51 PatNarciso snehring, im a bit ignorant with ZFS.  is adding the metadata cache tier painful?  can it be added to the existing setup with ease?
19:51 bowhunter joined #gluster
19:51 plarsen joined #gluster
19:52 snehring I think you can add l2arc on later
19:52 * PatNarciso was considering a lvm metadata ssd for his xfs mounts... but skipped the lvm step at setup and put xfs as the first partition of his raid6's.  *sigh*
19:52 skylar yep, you can, though we just researched that option w/o implementing it
19:54 PatNarciso any fears of the ssd fails?  whats the worse case here?  (I think this is often where I stop exploring zfs as a possibility)
19:54 PatNarciso s/of/if
19:54 snehring with l2arc I don't think there's any danger (since that's just cache) with log devices it could be problematic depending on when the failure happened
19:55 skylar no, the only risk would be that you lose performance if the device fails
19:56 skylar unlike the ZIL, where you actually can lose data if you lose the ZIL device and then have a system crash
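(Adding the cache and log devices after the fact, as discussed above, is a one-liner each. A hedged sketch with hypothetical NVMe device names:)

    # L2ARC read cache: losing this device only costs performance, never data
    zpool add tank cache nvme0n1
    # SLOG for sync writes: mirror it, since losing an unmirrored log device right
    # before a crash can drop the last few seconds of synchronous writes
    zpool add tank log mirror nvme1n1 nvme2n1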
19:57 * PatNarciso is considering zfs.
19:58 snehring I only have good things to say, but when it breaks it breaks in ways that might be difficult to recover from
19:58 snehring I don't have concrete examples, just stuff I've seen from chatter on mailing lists and irc
19:59 snehring as with anything having a backup is important
19:59 bbooth joined #gluster
20:01 snehring I should specify, by 'breaks' I mean like zpool imports fail for no apparent reason for example
20:01 snehring not like 'common failures' like needing to replace a drive
20:02 skylar the worst problem we had was caused by a solaris bug causing the wrong drive's error light to come on
20:03 PatNarciso heh - so, you addressed the wrong disk?
20:03 skylar yup
20:03 PatNarciso doh.
20:03 skylar which then caused a double drive failure (we were just running RAIDZ1)
20:03 farhorizon joined #gluster
20:04 skylar like snehring said, have backups and you'll be fine. all storage systems have problems at some point or another
20:04 snehring on the topic of backups zfs sends and receives make that really easy
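(The send/receive workflow snehring refers to, in its simplest form; dataset, snapshot and host names are hypothetical:)

    zfs snapshot tank/projects@2017-01-25
    # initial full copy to a backup pool on another host
    zfs send tank/projects@2017-01-25 | ssh backuphost zfs receive backup/projects
    # later runs only ship the delta between two snapshots
    zfs snapshot tank/projects@2017-02-01
    zfs send -i tank/projects@2017-01-25 tank/projects@2017-02-01 | \
        ssh backuphost zfs receive backup/projects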
20:04 JoeJulian Backups are not always feasible.
20:05 skylar true, though if the data are important enough, you make them feasible
20:06 JoeJulian When you have 25PB of critical data, you're still not going to back it up.
20:06 skylar we have ~10PB of disk, with 12PB in backups
20:07 JoeJulian That's a lot of bandwidth.
20:07 skylar it is, but the value of the data make it worth it
20:08 JoeJulian But I stand corrected. I know plenty of business/science agencies/orgs that cannot. The cost and availability of the bandwidth to get the data off-site is impossible.
20:08 skylar our backups live on tape, so we live by the bandwidth of the Iron Mountain truck :)
20:09 JoeJulian tape?!?! Wow.
20:09 PatNarciso JoeJulian, I'm reading some of your messages from October.  I assume you're favoring xfs > zfs?
20:09 skylar JoeJulian - large-scale tape is much cheaper than disk (part of how we make these backups feasible)
20:09 JoeJulian I'm not a zfs fan. I feel it needs way too many resources for what you get.
20:09 hchiramm_ joined #gluster
20:10 snehring it can be memory hungry
20:10 JoeJulian How do you validate your tape archives?
20:10 JoeJulian The problem I always had with tape was it being unreadable when you actually needed it.
20:11 skylar our backup software (TSM) does in-line checksum validation
20:11 skylar and we do a media refresh every few years
20:11 MikeLupe JoeJulian: can you hint me some direction if "gluster volume heal engine info" showed no entries? thx
20:11 JoeJulian No entries should mean it's done.
20:11 PatNarciso Our offsite backup is a rolling cart full of USB disks linked into a USB hub.   We bring it into the office, mount each disk into a distributed gluster volume and watch rsync run for days...  It's MUCH less expensive than tape.  and sometimes faster.
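(The cart-of-USB-disks approach boils down to something like the following once the disks are assembled into a distributed volume; mount points and the volume name are hypothetical:)

    mount -t glusterfs backupcart:/usbvol /mnt/usbvol
    # -a preserves ownership/times; --inplace avoids rsync's temp-file rename dance,
    # which tends to behave better on Gluster mounts
    rsync -a --inplace --progress /mnt/production/ /mnt/usbvol/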
20:12 MikeLupe can't follow the same path we did 3 days ago
20:12 skylar at least as of LTO3, the drives themselves would do read-after-write and catch any serious defects right away
20:12 skylar we backup anywhere from 5TB to 50TB per day, which limits our options a bit
20:13 skylar and we've had restores of 700TB...
20:13 PatNarciso The thought of a 700TB restore just turned my stomach.
20:13 skylar mine too, but it finished after a month or so
20:14 skylar fortunately our data are important, but generally not time-sensitive
20:15 nh2_ joined #gluster
20:17 PatNarciso JoeJulian, understanding you favor xfs, whats your recommendation for increasing speed / reducing IO of an 8-disk (8TB each) Raid6?  If I had to guess, adding lvm cache?
20:17 msvbhat joined #gluster
20:17 bbooth joined #gluster
20:19 JoeJulian For writes, you could put the journal on NVMe.
20:20 Seth_Karlo joined #gluster
20:20 JoeJulian Personally, I just like to engineer for the use case.
20:24 PatNarciso my direct use case: video editing, no archival.  'active' projects may ref video from years ago, or video ingested 10 mins ago.   xfs/gluster is the storage home for video (large worm), and project files (small, manyyy writes).   also, encoding video == randomio like no other.
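(Two ways to act on the journal-on-NVMe and lvm-cache ideas above, both hedged sketches with hypothetical device names: md0 is the RAID6 array, nvme0n1 the fast device:)

    # 1) external XFS log: moves journal writes off the spinning RAID6
    mkfs.xfs -l logdev=/dev/nvme0n1p1,size=64m /dev/md0
    mount -o logdev=/dev/nvme0n1p1,inode64 /dev/md0 /bricks/brick1

    # 2) lvmcache: put an SSD cache pool in front of the slow LV (requires LVM in the stack)
    vgextend vg0 /dev/nvme0n1
    lvcreate --type cache-pool -L 200G -n fastcache vg0 /dev/nvme0n1
    lvconvert --type cache --cachepool vg0/fastcache vg0/brick1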
20:26 jdossey joined #gluster
20:36 PatNarciso I was considering gluster tiering for a while.  I need a better way of identifying what should be in the tier (maybe via xattr?).
20:37 JoeJulian Typically it's just what's being used. When it's no longer being used, it should go back to cold storage.
20:39 PatNarciso I agree with that; but that wasn't what happened.  I should review the promote/demote settings again.
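(For reference, the knobs PatNarciso wants to revisit: hot-tier attachment and the promote/demote settings of the tiering feature. Volume, host and brick names are hypothetical:)

    gluster volume tier bigvol attach replica 2 \
        ssd1:/bricks/hot1 ssd2:/bricks/hot1
    gluster volume set bigvol cluster.tier-promote-frequency 120     # seconds between promotion runs
    gluster volume set bigvol cluster.tier-demote-frequency 3600     # seconds between demotion runs
    gluster volume set bigvol cluster.watermark-hi 90                # above this % full, only demotion happens
    gluster volume set bigvol cluster.watermark-low 75               # below this % full, no demotion happens
    gluster volume tier bigvol status                                # promote/demote counters per node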
20:40 ebbex joined #gluster
20:40 PatNarciso I also considered a homebrew overlay/aufs solution...  where there would be gluster-cold and gluster-hot volumes, and some process to rsync when idle, or during downtime.  That solution just felt wrong to me.
20:40 nh2_ joined #gluster
20:47 valkyr3e joined #gluster
20:47 Vapez joined #gluster
20:48 Vapez Hello
20:48 glusterbot Vapez: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
20:48 Vapez I just mounted the glusterfs partition; df -h shows the usage, but when I ls in the folder it doesn't show the files
20:48 Vapez Last time I fixed it but I don't remember how..
20:49 Vapez any idea?
20:49 farhorizon joined #gluster
20:51 bowhunter joined #gluster
20:52 Vapez It's working now
20:52 Vapez Why was there this delay?
20:52 Vapez JoeJulian: do you know?
20:53 JoeJulian No idea.
20:54 bbooth joined #gluster
20:55 JoeJulian Vapez: Check your client logs for clues.
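(Where those client logs live, plus a more verbose remount, assuming a hypothetical volume gv0 mounted at /mnt/gv0; the log file name mirrors the mount path with slashes turned into dashes:)

    less /var/log/glusterfs/mnt-gv0.log
    # remount with more detail if the default log level shows nothing useful
    umount /mnt/gv0
    mount -t glusterfs -o log-level=DEBUG server1:/gv0 /mnt/gv0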
20:59 deangiberson joined #gluster
21:02 armyriad joined #gluster
21:06 anoopcs joined #gluster
21:07 squizzi joined #gluster
21:07 armyriad joined #gluster
21:08 armyriad joined #gluster
21:11 shyam joined #gluster
21:14 farhorizon joined #gluster
21:20 msvbhat joined #gluster
21:21 vbellur joined #gluster
21:29 cacasmacas joined #gluster
21:51 Seth_Karlo joined #gluster
21:54 Seth_Karlo joined #gluster
21:56 bowhunter joined #gluster
22:03 derjohn_mobi joined #gluster
22:27 jdossey joined #gluster
22:32 farhorizon joined #gluster
22:33 vbellur joined #gluster
22:36 plarsen joined #gluster
22:39 farhorizon joined #gluster
22:55 Wizek_ joined #gluster
23:07 arpu joined #gluster
23:19 bbooth joined #gluster
23:28 squizzi joined #gluster
23:59 victori joined #gluster
23:59 farhoriz_ joined #gluster
