
IRC log for #gluster, 2017-02-08


All times shown according to UTC.

Time Nick Message
00:45 BuBU29 joined #gluster
00:53 raghu joined #gluster
00:57 shdeng joined #gluster
01:29 shdeng joined #gluster
01:49 PatNarciso nh2_ yes, rolling rebalance is no fun.
01:51 PatNarciso mathatoms, I too have in my notes that there is a limit of 24 bricks per node.
01:52 PatNarciso if I recall correctly, it was RH docs that stated that, and to combine the drives (raid?) if needed to reduce brick count.
01:54 musa22 joined #gluster
01:55 mathatoms PatNarciso, I was hoping to find something more definitive in the gluster documentation
01:56 mathatoms that redhat document seemed old
01:56 raghu joined #gluster
01:56 mathatoms i found some blog posts of people using zfs to create a software raid to make larger bricks out of multiple drives
01:56 mathatoms which I may have to end up doing
02:00 mathatoms or maybe use LVM to combine the drives.  I don't know what to do yet
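As an aside on combining drives into one larger brick: a minimal sketch of the LVM route mathatoms mentions, assuming two spare drives /dev/sdb and /dev/sdc and a hypothetical brick path /data/brick1 (the device names, paths, and the 512-byte inode size are assumptions, not from the channel):

    pvcreate /dev/sdb /dev/sdc
    vgcreate brickvg /dev/sdb /dev/sdc
    lvcreate -l 100%FREE -n brick1 brickvg          # one logical volume spanning both drives
    mkfs.xfs -i size=512 /dev/brickvg/brick1        # 512-byte inodes leave room for gluster's xattrs
    mkdir -p /data/brick1
    mount /dev/brickvg/brick1 /data/brick1          # use this path as the brick in "gluster volume create"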
02:03 victori joined #gluster
02:06 susant joined #gluster
02:06 Gambit15 joined #gluster
02:07 PatNarciso mathatoms, I have yet to dive deep into ZFS.  I need to.  Its bitrot resolution functionality seems pretty slick.
02:07 PatNarciso kinda hard to leave xfs tho... xfs is like an old war buddy.  we've been in the shit together.
02:08 mathatoms haha
02:09 mathatoms i've never used zfs before myself, but it has quite a few features that seem interesting.  i'm looking forward to testing out the native lz4 compression.
02:10 mathatoms i've been using xfs on our current gluster instance and it has been working out great so far
02:10 mathatoms much better than that btrfs debacle
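On the lz4 compression mentioned a few lines up, a rough sketch of a ZFS pool spanning two drives with compression enabled (pool, dataset, and device names are hypothetical; xattr=sa is a commonly suggested tuning for gluster bricks on ZFS, not something stated here):

    zpool create brickpool /dev/sdb /dev/sdc        # striped pool across both drives (no redundancy)
    zfs set compression=lz4 brickpool               # inherited by child datasets
    zfs set xattr=sa brickpool                      # store xattrs efficiently; gluster is xattr-heavy
    zfs create -o mountpoint=/data/brick1 brickpool/brick1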
02:11 prasanth joined #gluster
02:12 PatNarciso before gluster, our first fileserver was on btrfs... and our video editors destroyed it.
02:12 PatNarciso destroyed == it became unusable multiple times a day.
02:14 mathatoms i believe it.  it seems like we would get to about 70% fs usage on btrfs and the kernel would start panicking and corrupt the partition
02:14 mathatoms never again
02:15 PatNarciso just curious, what linux distro are ya running?
02:16 mathatoms fedora
02:18 PatNarciso what kinda connectivity do ya have between your nodes?
02:19 PatNarciso bonded 1gX4 here... I'm questioning if it's causing issues with my distributed setup.  it's stable, however response time is poor.
02:21 mathatoms we only have bonded 1gX2
02:22 mathatoms using lacp
02:23 mathatoms it's stable, the response time isn't great, but it's mostly archival storage.  it's fast enough
02:24 derjohn_mob joined #gluster
02:27 PatNarciso lacp here also.  hmm.  I should try totally disabling it.  just to see if response is better.
02:28 mathatoms we ran for a few weeks without lacp.  things improved a lot once we got that working
02:31 * PatNarciso is considering 10g -- just having trouble making that purchase w/o knowing for sure networking is the bottleneck.
02:32 PatNarciso adding an ssd hot tier slows things down.  so, I'm pretty much thinking that networking is the bottleneck.  every ms counts with an ls.
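A quick, non-authoritative way to check whether the bond or latency is the bottleneck before buying 10G hardware (interface and host names below are made up; iperf3 needs a server running on the peer):

    cat /proc/net/bonding/bond0       # confirm 802.3ad (LACP) mode and that all slaves are up
    ping -c 20 node2.example.com      # round-trip latency; small-file ops like ls are latency-bound
    iperf3 -c node2.example.com -P 4  # parallel streams, since LACP hashes per-flow, not per-packet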
02:46 nirokato_ joined #gluster
02:48 sanoj joined #gluster
02:52 bbooth joined #gluster
03:11 vbellur joined #gluster
03:11 susant joined #gluster
03:22 kramdoss_ joined #gluster
03:26 mb_ joined #gluster
03:35 magrawal joined #gluster
03:47 nbalacha joined #gluster
03:55 atinm joined #gluster
04:21 poornima_ joined #gluster
04:27 susant left #gluster
04:30 raghu joined #gluster
04:38 sbulage joined #gluster
04:40 gyadav joined #gluster
04:45 ndarshan joined #gluster
04:45 buvanesh_kumar joined #gluster
04:47 aravindavk joined #gluster
04:47 mb_ joined #gluster
04:48 apandey joined #gluster
04:49 itisravi joined #gluster
04:51 rafi joined #gluster
04:52 ppai joined #gluster
04:57 skoduri joined #gluster
04:58 skumar joined #gluster
05:00 sanoj joined #gluster
05:05 hgowtham joined #gluster
05:07 Karan joined #gluster
05:07 BitByteNybble110 joined #gluster
05:08 Prasad joined #gluster
05:16 RameshN joined #gluster
05:17 karthik_us joined #gluster
05:19 nishanth joined #gluster
05:19 Karan joined #gluster
05:20 ankit_ joined #gluster
05:41 msvbhat joined #gluster
05:42 rjoseph joined #gluster
05:48 apandey joined #gluster
05:49 rafi joined #gluster
05:51 rastar joined #gluster
05:52 riyas joined #gluster
05:52 [diablo] joined #gluster
05:59 k4n0 joined #gluster
06:07 nthomas joined #gluster
06:08 kdhananjay joined #gluster
06:12 Saravanakmr joined #gluster
06:16 Philambdo joined #gluster
06:17 R0ok_ joined #gluster
06:19 scubacuda joined #gluster
06:24 ahino joined #gluster
06:37 sbulage joined #gluster
06:42 rafi1 joined #gluster
06:44 kotreshhr joined #gluster
06:49 bbooth joined #gluster
07:03 Wizek_ joined #gluster
07:07 msvbhat joined #gluster
07:10 jwd joined #gluster
07:12 jiffin joined #gluster
07:12 Philambdo1 joined #gluster
07:19 k4n0 joined #gluster
07:24 jkroon joined #gluster
07:30 Wizek__ joined #gluster
07:33 jtux joined #gluster
07:46 Philambdo joined #gluster
08:00 nthomas joined #gluster
08:12 susant joined #gluster
08:21 jri joined #gluster
08:25 Bardack joined #gluster
08:28 Ulrar joined #gluster
08:32 ivan_rossi joined #gluster
08:37 sanoj joined #gluster
08:41 mhulsman joined #gluster
08:46 jiffin joined #gluster
08:46 sanoj joined #gluster
08:47 apandey joined #gluster
08:47 Wizek__ joined #gluster
08:48 musa22 joined #gluster
08:52 musa22 joined #gluster
08:58 derjohn_mob joined #gluster
08:59 fsimonce joined #gluster
09:04 overclk joined #gluster
09:07 pulli joined #gluster
09:10 skarlso joined #gluster
09:10 skarlso hi folks
09:19 kramdoss_ joined #gluster
09:21 jeffspeff joined #gluster
09:23 Wizek__ joined #gluster
09:27 ShwethaHP joined #gluster
09:29 pulli joined #gluster
09:36 Teraii_ joined #gluster
09:38 yosafbridge` joined #gluster
09:40 mrErikss1n joined #gluster
09:40 Plam_ joined #gluster
09:40 akay joined #gluster
09:40 Bardack_ joined #gluster
09:40 Anarka_ joined #gluster
09:40 samppah_ joined #gluster
09:40 squeakyneb_ joined #gluster
09:41 decay_ joined #gluster
09:41 rofl_____ joined #gluster
09:41 ketarax_ joined #gluster
09:42 akay Hi, I've just created a volume that I'm trying to delete, but gluster delete <vol> is giving the error "volume delete: gv3: failed: Some of the peers are down" even though all the peers are up - how do I delete it?
09:43 akay Or alternatively can I change from a Distributed volume to a replica 2 volume?
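For akay's two questions, a hedged sketch (volume name gv3 from the question; server and brick paths are hypothetical). A started volume must be stopped before it can be deleted, and the "peers are down" error is worth cross-checking against peer status; alternatively, a plain distribute volume can be converted to replica 2 by adding one new brick per existing brick:

    gluster peer status                 # every peer should report "Peer in Cluster (Connected)"
    gluster volume stop gv3             # a started volume cannot be deleted
    gluster volume delete gv3
    # ...or, instead of deleting, turn distribute into replica 2:
    gluster volume add-brick gv3 replica 2 server2:/bricks/gv3/brick1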
09:43 ppai joined #gluster
09:43 bitchecker_ joined #gluster
09:44 Wizek__ joined #gluster
09:44 Nebraskka_ joined #gluster
09:44 lucasrolff_ joined #gluster
09:45 samikshan_ joined #gluster
09:46 ahino joined #gluster
09:51 ShwethaHP joined #gluster
09:53 rastar joined #gluster
09:53 XpineX joined #gluster
09:53 kdhananjay joined #gluster
10:09 Wizek__ joined #gluster
10:14 derjohn_mob joined #gluster
10:26 pulli joined #gluster
10:27 Gambit15 joined #gluster
10:36 kdhananjay joined #gluster
10:46 tallmocha joined #gluster
10:52 Wizek__ joined #gluster
11:01 kramdoss_ joined #gluster
11:05 jtux joined #gluster
11:09 Wizek__ joined #gluster
11:14 pulli joined #gluster
11:19 atinm joined #gluster
11:23 cloph Hi there - maybe someone can give a hint - we're using qemu/kvm on debian8, with our own qemu built with gluster support. Our VMs are acting up i/o wise, namely iostat showing 100% utilization, while no i/o is actually happening. so load skyrockets and disk-writing stuff will fail/crawl with 1-2 writes/s..
11:23 cloph Does that ring a bell for anyone? (even when not running with the disk from gluster volume, but from local storage)?
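For reference, the symptom cloph describes can be watched from inside a guest with something like:

    iostat -x 1      # %util pinned at 100 while r/s and w/s stay near zero matches the description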
11:39 k4n0 joined #gluster
11:42 Wizek_ joined #gluster
11:52 Shu6h3ndu joined #gluster
11:56 cloph hmm https://bugzilla.redhat.com/show_bug.cgi?id=1414242 that has no real description, but might be related?
11:56 glusterbot Bug 1414242: high, high, ---, bugs, ASSIGNED , [whql][virtio-block+glusterfs]"Disk Stress" and "Disk Verification" job always failed on win7-32/win2012/win2k8R2 guest
12:03 percevalbot joined #gluster
12:05 k4n0 joined #gluster
12:11 Gambit15 cloph, FWIW, never seen the same in our environment
12:12 kotreshhr left #gluster
12:17 msvbhat_ joined #gluster
12:19 sbulage joined #gluster
12:20 dspisla joined #gluster
12:20 dspisla Hello, does anybody know how to use the meta xlator? Should I write it to my volfile manually?
12:22 rastar dspisla: it should be enabled by default on new installations
12:22 rastar dspisla: which version of gluster?
12:22 dspisla 3.8.8
12:22 rastar dspisla: on any fuse mount point you should be able to "cd .meta" to see meta xlator data
12:23 dspisla Thank you I found it :-)
12:24 dspisla I did not see it via ls -a
12:24 rastar dspisla: we hide it
12:24 dspisla HaHaHa Alright you are clever
12:24 rastar dspisla: :), do explore and let us know your feedback for meta xlator
12:33 cloph oh, you hide from ls, but still can cd to it? interesting magic :-)
12:33 cloph (curious: what kind of info would be in that / what made dspisla wanna play with it?)
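To cloph's question about what's in there: a hedged peek at the meta xlator on a FUSE mount (the mount path is made up, and the exact entries vary by version):

    cd /mnt/gv0/.meta                 # hidden from ls/readdir, but cd works, as rastar says
    ls graphs/active                  # one directory per loaded translator in the active graph
    cat version                       # client version info
    cat frames                        # in-flight call frames, useful when something hangs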
12:34 atinm joined #gluster
12:38 sbulage joined #gluster
12:41 msvbhat joined #gluster
12:42 dspisla @nigelb Hello, are you there?
12:46 k4n0 joined #gluster
12:53 ahino joined #gluster
12:59 kdhananjay joined #gluster
12:59 susant left #gluster
13:05 pulli joined #gluster
13:11 Wizek_ joined #gluster
13:12 pdrakeweb joined #gluster
13:13 buvanesh_kumar joined #gluster
13:15 msvbhat joined #gluster
13:18 loadtheacc joined #gluster
13:19 ndarshan joined #gluster
13:26 k4n0 joined #gluster
13:26 Karan joined #gluster
13:36 sbulage joined #gluster
13:39 Saravanakmr joined #gluster
13:43 unclemarc joined #gluster
13:44 ankit__ joined #gluster
13:47 nbalacha joined #gluster
13:48 ppai joined #gluster
13:50 flying joined #gluster
13:52 sbulage joined #gluster
14:04 rafi joined #gluster
14:04 pulli joined #gluster
14:08 vanshyr_ joined #gluster
14:09 msvbhat joined #gluster
14:10 vanshyr_ joined #gluster
14:12 ahino joined #gluster
14:13 squizzi joined #gluster
14:14 shyam joined #gluster
14:23 farhorizon joined #gluster
14:26 skylar joined #gluster
14:37 ahino joined #gluster
14:47 farhorizon joined #gluster
14:47 tallmocha joined #gluster
14:48 Saravanakmr joined #gluster
14:50 ashiq joined #gluster
14:55 mhulsman joined #gluster
14:55 farhorizon joined #gluster
15:00 rafi1 joined #gluster
15:03 susant joined #gluster
15:03 susant left #gluster
15:06 zoyvind_ joined #gluster
15:18 shyam joined #gluster
15:25 ahino joined #gluster
15:26 mhulsman joined #gluster
15:39 pulli joined #gluster
15:43 ivan_rossi left #gluster
15:49 sudoSamurai joined #gluster
15:54 shyam joined #gluster
15:57 plarsen joined #gluster
16:00 wushudoin joined #gluster
16:05 victori joined #gluster
16:07 skarlso joined #gluster
16:08 arpu joined #gluster
16:28 [diablo] joined #gluster
16:33 shaunm joined #gluster
16:34 msvbhat joined #gluster
16:36 social joined #gluster
16:36 tallmocha Hi, we have been getting "W [rpcsvc.c:270:rpcsvc_program_actor] 0-rpc-service: RPC program not available (req 1298437 330) for 1x.xx.xx.67:65533" messages periodically on our gluster machines. Anyone know what they mean?
16:37 tallmocha Right after we get "E [rpcsvc.c:565:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully"
16:41 jwd joined #gluster
16:41 farhorizon joined #gluster
16:47 skoduri joined #gluster
16:47 jdossey joined #gluster
16:50 msvbhat joined #gluster
17:03 plarsen joined #gluster
17:06 skarlso joined #gluster
17:07 Karan joined #gluster
17:15 farhorizon joined #gluster
17:15 Gambit15 Hmm...apologies, 4 line paste coming (not really worth fpaste)
17:15 Gambit15 [root@v0 ~]# gluster snapshot list data
17:15 Gambit15 data-bck_GMT-2017.02.07-14.30.28
17:15 Gambit15 [root@v0 ~]# mount -t glusterfs localhost:/snaps/data-bck_GMT-2017.02.07-14.30.28/data /mnt
17:15 Gambit15 Mount failed. Please check the log file for more details.
17:16 Gambit15 That, following: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Snapshots/
17:16 glusterbot Title: Managing Snapshots - Gluster Docs (at gluster.readthedocs.io)
17:16 Gambit15 What am I doing wrong? How do I mount a snapshot?
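One likely (though unconfirmed here) cause of Gambit15's mount failure: a snapshot has to be activated before its bricks run and it can be mounted. A sketch using the names from the paste:

    gluster snapshot status data-bck_GMT-2017.02.07-14.30.28     # shows the snapshot's brick status
    gluster snapshot activate data-bck_GMT-2017.02.07-14.30.28   # starts the snapshot bricks
    mount -t glusterfs localhost:/snaps/data-bck_GMT-2017.02.07-14.30.28/data /mnt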
17:16 musa22 joined #gluster
17:24 musa22 joined #gluster
17:32 Manikandan joined #gluster
17:35 pjrebollo joined #gluster
17:50 kshlm joined #gluster
17:50 jkroon joined #gluster
17:53 bbooth joined #gluster
18:10 ttkg joined #gluster
18:15 jkroon joined #gluster
18:29 jiffin joined #gluster
18:32 jri joined #gluster
18:41 jwd joined #gluster
18:42 Vapez joined #gluster
18:46 vbellur joined #gluster
18:47 Gambit15 Ugh..."snapshot clone: failed: One or more bricks are not running." volume status shows all of my bricks as online
18:48 Gambit15 ...and I'm not seeing any of the I/O issues I'd expect if one of the peers was offline
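Possibly related to the mount failure earlier: the "bricks are not running" message from snapshot clone may refer to the snapshot's own bricks rather than the volume's, in which case activating the snapshot first could help (a guess, not confirmed in the channel; the clone name is hypothetical):

    gluster snapshot activate data-bck_GMT-2017.02.07-14.30.28   # if not already active (see above)
    gluster snapshot clone data-clone data-bck_GMT-2017.02.07-14.30.28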
18:50 niknakpaddywak joined #gluster
18:58 bluenemo joined #gluster
19:20 tallmocha joined #gluster
19:21 f0rpaxe_ joined #gluster
19:33 scubacuda joined #gluster
19:39 jwd joined #gluster
19:48 jkroon joined #gluster
19:57 rastar joined #gluster
20:05 pulli joined #gluster
20:06 jdossey joined #gluster
20:07 AppStore joined #gluster
20:12 telius joined #gluster
20:18 mb_ joined #gluster
20:20 tallmocha joined #gluster
20:25 serg_k joined #gluster
20:30 rastar joined #gluster
20:36 farhoriz_ joined #gluster
20:42 ashiq joined #gluster
21:08 farhorizon joined #gluster
21:09 bbooth joined #gluster
21:09 farhorizon joined #gluster
21:14 bbooth joined #gluster
21:29 jdossey joined #gluster
21:55 derjohn_mob joined #gluster
22:22 ashiq joined #gluster
22:39 ShwethaHP joined #gluster
22:40 ShwethaHP left #gluster
23:00 farhorizon joined #gluster
23:09 musa22 joined #gluster
23:11 john51 joined #gluster
23:16 loadtheacc joined #gluster
23:27 jwd joined #gluster
23:40 john51 joined #gluster
