
IRC log for #gluster, 2016-05-02

| Channels | #gluster index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:06 johnmilton joined #gluster
00:08 betheynyx joined #gluster
00:23 MikeLupe I just had the same problem as russoisraeli - did not remount, cleared everything. Now I have a problem with an additional volume added:
00:24 MikeLupe gluster volume create iso replica 3  $KVMHOST1:/gluster/iso/brick1 $KVMHOST2:/gluster/iso/brick1  $KVMHOST3:/gluster/iso/brick1
00:24 MikeLupe "volume create: iso: failed" ; "Setting extended attributes failed, reason: Structure needs cleaning."
00:25 MikeLupe How do I clean the structure... without wiping it all again?
00:25 MikeLupe oh damn
00:26 russoisraeli I ended up stopping the volume, backing up the file from the underlying filesystem on each brick, removing the file from each brick, and then starting the volume again. Then the corrupted file disappeared, and I was able to copy the backup back
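[Editor's note: the recovery russoisraeli describes, sketched as commands. The volume name, brick path, and file path are hypothetical placeholders, not values from this log.]

```shell
# Sketch of removing a corrupted file directly from the bricks.
# $VOL, the brick path, and $FILE are hypothetical placeholders.
VOL=myvol
FILE=path/to/corrupted.file

# 1. Stop the volume so no client touches the file mid-surgery.
gluster volume stop $VOL

# 2. On EACH brick server: back up, then remove the file from the
#    brick's underlying filesystem (not from the FUSE mount).
cp /bricks/brick1/$FILE /root/backup.$(hostname)
rm /bricks/brick1/$FILE
# If the file's gfid is known, also remove its hard link under the
# brick's .glusterfs directory so the old copy can't be resurrected.

# 3. Start the volume and copy the backup back via a client mount.
gluster volume start $VOL
cp /root/backup.$(hostname) /mnt/glusterfs/$FILE
```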
00:27 MikeLupe It now just created it, without me changing anything..
00:27 JoeJulian "structure needs cleaning" can be an xfs error on the brick filesystem.
00:28 MikeLupe Yeah, lot of work. I'm lucky having no data on it (anymore, some Test-VMs)
00:28 MikeLupe JoeJulian: ok
00:32 MikeLupe I'm really going in circles... now the new 3rd "iso" volume comes up, but the other two get errors..
00:32 MikeLupe "volume start: engine: failed: Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /gluster/engine/brick1. Reason : No data available"
00:33 MikeLupe When starting...
00:34 JoeJulian is part of the path /gluster/engine/brick1 supposed to be a mounted disk?
00:34 JoeJulian (besides the obvious / mount)
00:38 MikeLupe I just checked and saw fuse mount active, but only on "engine", not on my first "data"
00:39 MikeLupe unmounted, but still get the same error
00:44 MikeLupe I'm wiping..dell style
00:46 MikeLupe deleted volume, recreated with some volume options - first start said "volume start: engine: failed: Commit failed on localhost. Please check log file for details."
00:46 gdi2k JoeJulian, thanks, I'm not yet sure how SE Linux would impact things, but will look into it. any docs / guidance you could point me to?
00:46 MikeLupe Second try some seconds later again "volume start: engine: failed: Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /gluster/engine/brick1. Reason : No data available"
00:52 F2Knight joined #gluster
00:59 JoeJulian MikeLupe: there shouldn't be a fuse mount along that path. That's your brick.
00:59 JoeJulian Did you wipe your brick?
00:59 MikeLupe I'm trying
01:00 JoeJulian Ah, ok. Then you're probably getting the path or prefix error message.
01:00 JoeJulian @path or prefix
01:00 glusterbot JoeJulian: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
01:00 JoeJulian er, no.
01:00 JoeJulian Oh, right. Forget that.
01:00 MikeLupe ...
01:00 JoeJulian If you wiped your brick, gluster will protect you from yourself by not starting a brick daemon on a path that's no longer marked as a brick.
01:01 MikeLupe hmm..
01:01 JoeJulian This is great if (in your example) /dev/sdb1 had not mounted on /gluster/engine during the boot.
01:01 haomaiwang joined #gluster
01:01 MikeLupe ok
01:01 JoeJulian It would suck to have all your data replicated to /gluster/engine/brick1 on the root filesystem.
01:02 JoeJulian But... since you did it on purpose, you can override that safety mechanism. "gluster volume start $volname force"
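[Editor's note: the missing-xattr error can be confirmed directly on the brick before forcing a start. Paths and the volume name below come from the error messages in this log; `getfattr` is from the attr package.]

```shell
# Check whether the brick directory still carries the volume-id
# xattr that glusterd looks for at volume start.
getfattr -m . -d -e hex /gluster/engine/brick1
# A healthy brick shows trusted.glusterfs.volume-id=0x...; if it is
# absent, glusterd refuses to start the brick as a safety measure.

# If the brick was wiped on purpose, override the safety check:
gluster volume start engine force
```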
01:02 MikeLupe haha
01:03 MikeLupe ok, but now I hope I'm even able to create that volume... it says "/gluster/engine/brick1 is already part of a volume"
01:04 MikeLupe The bricks not shown anywhere..
01:04 MikeLupe The brick's not shown anywhere..
01:06 JoeJulian remove the "/gluster/engine/brick1" directory (probably on all your servers - assuming you're using the same path on all of them)(also assuming you're trying to start over with a completely empty volume)
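[Editor's note: JoeJulian's "start over" advice, sketched as commands. Run on every server; assumes the same brick path everywhere and that you genuinely want an empty volume.]

```shell
# Destroy the old brick directory entirely and recreate it, so no
# leftover trusted.* xattrs or .glusterfs metadata can trigger the
# "is already part of a volume" check.
rm -rf /gluster/engine/brick1
mkdir /gluster/engine/brick1

# Alternative, if the directory must be kept: strip the gluster
# xattrs and metadata instead of removing the directory itself.
setfattr -x trusted.glusterfs.volume-id /gluster/engine/brick1
setfattr -x trusted.gfid /gluster/engine/brick1
rm -rf /gluster/engine/brick1/.glusterfs
```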
01:06 MikeLupe I am
01:07 JoeJulian I'm just being careful with my advice. I'm tired. I've been working on a broken ceph installation all weekend.
01:07 MikeLupe great - "structure needs cleaning"... I really messed up, didn't I?
01:07 MikeLupe So am I, thanks anyway :)
01:07 JoeJulian check `dmesg | tail` on all your servers.
01:08 MikeLupe You worked on your split-brain test..?
01:08 MikeLupe wth - "XFS (dm-2): metadata I/O error: block 0x2 ("xfs_trans_read_buf_map") error 117 numblks 1
01:08 MikeLupe [18527.061218] XFS (dm-2): xfs_do_force_shutdown(0x1) called from line 370 of file fs/xfs/xfs_trans_buf.c.  Retu"
01:09 MikeLupe you already mentioned potential filesystem problems...
01:10 MikeLupe Metadata corruption detected at xfs_agf_read_verify+0x70/0x120 [xfs], block 0x1
01:10 MikeLupe it's a mess
01:17 MikeLupe ??????????  ? ?    ?       ?            ? engine
01:28 swebb joined #gluster
01:31 azilian joined #gluster
01:38 MikeLupe good night
01:38 MikeLupe And thanks again
01:47 om joined #gluster
02:01 haomaiwang joined #gluster
02:06 level7 joined #gluster
02:23 DV_ joined #gluster
02:30 karnan joined #gluster
02:39 ramteid joined #gluster
03:01 haomaiwang joined #gluster
03:04 F2Knight joined #gluster
03:10 kshlm joined #gluster
03:16 swebb joined #gluster
03:27 nishanth joined #gluster
03:28 Pintomatic joined #gluster
03:28 twisted` joined #gluster
03:29 rideh joined #gluster
03:30 samikshan joined #gluster
03:38 DV_ joined #gluster
03:44 itisravi joined #gluster
03:49 RameshN_ joined #gluster
03:52 rafi joined #gluster
03:52 atinm joined #gluster
04:01 haomaiwang joined #gluster
04:09 nbalacha joined #gluster
04:09 overclk joined #gluster
04:10 gem joined #gluster
04:11 nbalacha joined #gluster
04:15 prasanth joined #gluster
04:16 ppai joined #gluster
04:19 shubhendu joined #gluster
04:24 aspandey joined #gluster
04:26 Manikandan joined #gluster
04:31 sakshi joined #gluster
04:50 Manikandan joined #gluster
04:54 F2Knight joined #gluster
04:54 kotreshhr joined #gluster
05:01 haomaiwang joined #gluster
05:12 ndarshan joined #gluster
05:14 Debloper joined #gluster
05:15 nehar joined #gluster
05:17 Apeksha joined #gluster
05:24 JesperA joined #gluster
05:29 kotreshhr joined #gluster
05:31 sakshi joined #gluster
05:32 betheynyx joined #gluster
05:33 poornimag joined #gluster
05:36 gowtham joined #gluster
05:36 karthik___ joined #gluster
05:36 Bhaskarakiran joined #gluster
05:40 jiffin joined #gluster
05:46 F2Knight joined #gluster
05:53 rastar joined #gluster
05:53 nishanth joined #gluster
05:56 aspandey joined #gluster
06:01 haomaiwang joined #gluster
06:07 natarej joined #gluster
06:07 jiffin joined #gluster
06:08 mbukatov joined #gluster
06:09 kaushal_ joined #gluster
06:12 mhulsman joined #gluster
06:13 ashiq joined #gluster
06:15 pur joined #gluster
06:15 hgowtham joined #gluster
06:18 atinm joined #gluster
06:20 Wizek joined #gluster
06:21 ramky joined #gluster
06:22 aspandey joined #gluster
06:22 jtux joined #gluster
06:24 kdhananjay joined #gluster
06:29 rastar joined #gluster
06:29 hchiramm joined #gluster
06:33 kshlm joined #gluster
06:39 atinm joined #gluster
06:45 anil joined #gluster
06:51 Wizek joined #gluster
06:52 ecoreply joined #gluster
06:52 [Enrico] joined #gluster
06:59 mowntan joined #gluster
07:01 haomaiwang joined #gluster
07:05 atinm joined #gluster
07:12 Lee1092 joined #gluster
07:15 Manikandan joined #gluster
07:15 mowntan joined #gluster
07:16 mowntan joined #gluster
07:24 kassav joined #gluster
07:29 fsimonce joined #gluster
07:32 arcolife joined #gluster
07:35 Saravanakmr joined #gluster
07:36 ivan_rossi joined #gluster
07:44 hgowtham joined #gluster
07:59 karthik___ joined #gluster
08:01 haomaiwang joined #gluster
08:03 atinm joined #gluster
08:06 sakshi joined #gluster
08:09 Slashman joined #gluster
08:14 delhage_ joined #gluster
08:16 [Enrico] joined #gluster
08:18 vmallika joined #gluster
08:20 DV_ joined #gluster
08:35 kdhananjay joined #gluster
08:35 aspandey joined #gluster
08:43 DV_ joined #gluster
08:48 mowntan joined #gluster
08:48 mowntan joined #gluster
08:52 RameshN_ joined #gluster
09:01 haomaiwang joined #gluster
09:03 jbrooks joined #gluster
09:08 level7 joined #gluster
09:10 ramky joined #gluster
09:17 kdhananjay joined #gluster
09:22 Bardack ahoi
09:22 Bardack due to a hardware issue, we've lost the connectivity to our storage domain
09:22 Bardack which makes all VMs paused (make sense)
09:23 Bardack everything is back now and running, and all VMs stay in paused mode
09:23 Bardack while ovirt see the storage domain up and working fine
09:23 Bardack isn't there any auto action that'll check and resume if available ?
09:23 Bardack and how can i manually resume otherwise ?
09:25 gem joined #gluster
09:27 Bardack well, i powered off + run, and everything is back
09:33 aspandey joined #gluster
09:34 itisravi joined #gluster
09:35 ramky joined #gluster
09:40 Bardack mmm, 4 of the 20 machines do not want to start (fail to run VM …)
09:40 Bardack i m not an ovirt expert (the guy is on holiday)
09:41 Bardack i ll check :p
09:47 RameshN_ joined #gluster
09:48 atinm joined #gluster
09:53 post-factum JoeJulian: any luck with inducing split-brain on replica 3 arbiter 1 volume?
09:55 DV__ joined #gluster
09:58 gem joined #gluster
09:58 Manikandan joined #gluster
10:01 haomaiwang joined #gluster
10:14 karnan joined #gluster
10:16 natarej joined #gluster
10:17 skoduri joined #gluster
10:23 arcolife joined #gluster
10:24 kovshenin joined #gluster
10:27 arcolife joined #gluster
10:27 betheynyx_ joined #gluster
10:27 arcolife joined #gluster
10:37 marbu joined #gluster
10:37 nbalacha joined #gluster
10:37 Saravanakmr joined #gluster
10:37 arcolife joined #gluster
10:37 Marbug joined #gluster
10:37 hagarth1 joined #gluster
10:37 edong23_ joined #gluster
10:37 arcolife joined #gluster
10:37 kotreshhr1 joined #gluster
10:37 kenansulayman joined #gluster
10:37 DV joined #gluster
10:38 poornimag joined #gluster
10:38 wnlx joined #gluster
10:38 aspandey joined #gluster
10:41 Telsin joined #gluster
10:46 robb_nl joined #gluster
10:50 kovsheni_ joined #gluster
10:55 wnlx joined #gluster
10:59 johnmilton joined #gluster
11:01 haomaiwang joined #gluster
11:01 kovshenin joined #gluster
11:08 kovshenin joined #gluster
11:19 caitnop joined #gluster
11:24 gem joined #gluster
11:28 kovshenin joined #gluster
11:29 ira_ joined #gluster
11:31 kovsheni_ joined #gluster
11:33 kovsheni_ joined #gluster
11:34 mowntan joined #gluster
11:39 atinm joined #gluster
11:47 overclk joined #gluster
11:48 ppai joined #gluster
11:51 plarsen joined #gluster
11:55 unclemarc joined #gluster
12:01 haomaiwang joined #gluster
12:01 russoisraeli joined #gluster
12:04 kovshenin joined #gluster
12:04 Marbug left #gluster
12:16 rastar joined #gluster
12:18 wnlx joined #gluster
12:27 kovsheni_ joined #gluster
12:33 luizcpg joined #gluster
12:33 ^ScottyUk^ joined #gluster
12:33 ^ScottyUk^ left #gluster
12:36 ppai joined #gluster
12:37 shaunm joined #gluster
12:37 johnmilton joined #gluster
12:37 arcolife joined #gluster
12:42 Manikandan joined #gluster
12:57 theron joined #gluster
12:58 kovshenin joined #gluster
13:01 haomaiwang joined #gluster
13:02 MikeLupe joined #gluster
13:07 gowtham_ joined #gluster
13:08 jiffin joined #gluster
13:16 kovshenin joined #gluster
13:17 nbalacha joined #gluster
13:19 julim joined #gluster
13:23 cuqa_ joined #gluster
13:28 mpietersen joined #gluster
13:28 Slashman joined #gluster
13:31 shyam joined #gluster
13:33 Todi joined #gluster
13:36 kotreshhr joined #gluster
13:38 ninkotech joined #gluster
13:38 ninkotech_ joined #gluster
13:46 EinstCrazy joined #gluster
13:49 [Enrico] joined #gluster
13:49 kovsheni_ joined #gluster
13:51 skylar joined #gluster
13:52 nneul joined #gluster
13:54 nneul Looking at doing an upgrade from 3.5.4 to 3.6.7. The docs on the website talk about taking a full outage and upgrading clients. 1) Can this be done piecemeal (a server at a time over 2-3 hours), and 2) Is it safe to run the older clients against the new filesystem for a few weeks until they can be upgraded as well?
13:54 chirino_m joined #gluster
13:55 JoeJulian post-factum: nope. I had (have) a ceph problem all weekend.
13:56 post-factum wow
13:56 post-factum what was that?
13:56 kkeithley he was using Ceph, that was the problem.
13:56 * kkeithley ducks
13:57 JoeJulian It seems to be more of a LSI expander problem.
13:57 poornimag joined #gluster
13:59 post-factum kkeithley: nice sarcasm, but no :)
13:59 JoeJulian Bring up servers, ceph starts repairing, drives start dropping off. The damned "LTS" 3.13 kernel has an unfixed bug in xfs so the journal write continues forever. Even if I *can* get the server to reboot instead of hang, the Wiwynn Knox tray's LSI expander doesn't show up as a device unless it's power-cycled.
14:00 JoeJulian Which... of course... cannot be done remotely.
14:00 JoeJulian I and several people at the DC have had a shitty weekend.
14:01 JoeJulian Oh, so I called Wiwynn on Monday morning and it's a f'ing holiday.
14:01 JoeJulian Thanks Taiwan.
14:01 post-factum i believe, that was awesome experience for you, JoeJulian
14:01 haomaiwang joined #gluster
14:01 JoeJulian lol
14:01 post-factum :D
14:03 JoeJulian nneul: which doc on the web site? We've done rolling upgrades since 3.2. They work fine most of the time.
14:03 nneul http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6
14:03 JoeJulian I'm still too tired.
14:04 nneul it says recommended, just isn't clear how safe it is otherwise.
14:04 nneul In that case, cool! that will be much simpler.
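[Editor's note: the rolling-upgrade pattern discussed here is roughly the following, one server at a time. Package names, the volume name, and service commands are a sketch, not the official 3.5-to-3.6 procedure.]

```shell
# On ONE server at a time:
systemctl stop glusterd              # or: service glusterd stop
pkill glusterfsd; pkill glusterfs    # stop brick and self-heal daemons too

yum -y update glusterfs glusterfs-server glusterfs-fuse   # or apt-get

systemctl start glusterd

# Let self-heal catch this server up before touching the next one:
gluster volume heal myvol info       # repeat until no entries remain
```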
14:05 bowhunter joined #gluster
14:05 JoeJulian Heh, nice. "a) Scheduling a downtime (Recommended)" with no "b) "
14:05 nneul On a related note - I'll be doing whole-server upgrades with the option of just having it recreate each brick if there is any benefit to changing backend storage. Is there any preferred filesystem at this point, or just stick with ext4?
14:05 nneul i.e. btrfs
14:06 ashiq joined #gluster
14:09 kovshenin joined #gluster
14:10 EinstCrazy joined #gluster
14:12 JoeJulian I wouldn't use btrfs with a kernel <4.2 imho, but ymmv, and I wouldn't use any cow filesystem where I want write performance.
14:14 nneul This would be on 4.4... mostly concerned with stability (it's a backend filesystem for an HA voip cluster). This sounds like a 'if it ain't broke...' situation.
14:15 nneul Thank you.
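[Editor's note: a quick way to compare candidate brick filesystems for write performance is a direct-I/O dd run against the backend itself. Path and sizes are hypothetical; oflag=direct bypasses the page cache so the numbers reflect the disk, not RAM.]

```shell
# Write 1 GiB with O_DIRECT to the candidate brick filesystem,
# then clean up the test file.
dd if=/dev/zero of=/bricks/test/ddtest.img bs=1M count=1024 oflag=direct
rm /bricks/test/ddtest.img
```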
14:17 kotreshhr left #gluster
14:18 arcolife joined #gluster
14:19 kovshenin joined #gluster
14:31 kovshenin joined #gluster
14:31 Slashman joined #gluster
14:32 EinstCrazy joined #gluster
14:34 kovsheni_ joined #gluster
14:51 ctria joined #gluster
14:55 kovshenin joined #gluster
15:01 haomaiwang joined #gluster
15:04 shaunm joined #gluster
15:08 hchiramm joined #gluster
15:09 shubhendu joined #gluster
15:09 bowhunter joined #gluster
15:12 yosafbridge` joined #gluster
15:12 lezo_ joined #gluster
15:13 dblack joined #gluster
15:14 ic0n_ joined #gluster
15:15 twisted`_ joined #gluster
15:15 ic0n_ joined #gluster
15:15 tyler274_ joined #gluster
15:16 Logos01_ joined #gluster
15:16 PotatoGim_ joined #gluster
15:20 shyam joined #gluster
15:20 betheynyx joined #gluster
15:21 marlinc_ joined #gluster
15:21 nehar joined #gluster
15:21 Pintomatic joined #gluster
15:21 tbm_ joined #gluster
15:21 shortdudey123 joined #gluster
15:21 Trefex joined #gluster
15:22 shaunm joined #gluster
15:22 papamoose joined #gluster
15:22 robb_nl joined #gluster
15:22 kalzz joined #gluster
15:23 samikshan joined #gluster
15:24 cyberbootje joined #gluster
15:26 theron joined #gluster
15:26 edong23 joined #gluster
15:27 Pintomatic joined #gluster
15:27 ahino joined #gluster
15:27 theron joined #gluster
15:28 russoisraeli joined #gluster
15:29 level7 joined #gluster
15:29 XpineX joined #gluster
15:31 Chr1st1an joined #gluster
15:32 level7 joined #gluster
15:41 dlambrig_ joined #gluster
15:42 dlambrig_ left #gluster
15:46 The_Pugilist joined #gluster
15:46 wushudoin joined #gluster
15:49 Lee1092 joined #gluster
15:52 ppai joined #gluster
15:55 bennyturns joined #gluster
16:01 haomaiwang joined #gluster
16:03 ivan_rossi left #gluster
16:08 Logos01_ left #gluster
16:16 Marbug joined #gluster
16:23 amye joined #gluster
16:23 level7 joined #gluster
16:26 theron joined #gluster
16:29 overclk joined #gluster
16:29 kovsheni_ joined #gluster
16:30 F2Knight joined #gluster
16:37 Slashman joined #gluster
16:43 theron joined #gluster
16:47 ashiq joined #gluster
16:48 Bhaskarakiran joined #gluster
16:49 kpease joined #gluster
16:51 _nex_ joined #gluster
16:52 florian joined #gluster
16:52 shyam joined #gluster
16:53 florian joined #gluster
16:55 Manikandan joined #gluster
17:01 haomaiwang joined #gluster
17:01 lord4163 joined #gluster
17:03 theron joined #gluster
17:04 fubada joined #gluster
17:05 fubada hi. when setting up geo-rep, is it necessary to push-pem to all remote cluster nodes?
17:05 fubada or will remote slaves be automatically discovered
17:08 mhulsman joined #gluster
17:10 gem joined #gluster
17:12 shyam1 joined #gluster
17:14 bennyturns joined #gluster
17:31 overclk fubada: push-pem should take care of pushing to all nodes in the remote cluster.
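[Editor's note: overclk's answer in command form; a typical geo-replication setup sketch. Volume names and the slave host are hypothetical placeholders.]

```shell
# Generate the pem keys on the master cluster.
gluster system:: execute gsec_create

# Create the session; push-pem distributes the keys to every node
# of the remote (slave) cluster, so it only needs to run once.
gluster volume geo-replication mastervol slavehost::slavevol create push-pem

gluster volume geo-replication mastervol slavehost::slavevol start
gluster volume geo-replication mastervol slavehost::slavevol status
```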
17:35 theron joined #gluster
17:39 mpietersen joined #gluster
17:43 johnmilton joined #gluster
17:43 jiffin joined #gluster
17:47 siel joined #gluster
17:52 julim joined #gluster
17:56 Logos01 joined #gluster
17:57 Logos01 Howdy, folks. So I have an odd case here. I'm attempting to test writespeeds on a replica-3 volume by use of dd if=/dev/zero bs=1M count=100 of=/path/to/glusterfs/fusemount/zero.img
17:58 Logos01 Trouble is, whenever I do so the dd process goes to uninterruptible sleep (state D in ps) and requires the server to be rebooted to clear it out.
17:58 Logos01 This ... seems very wrong.
17:59 Logos01 Same thing if I just copy files in.
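[Editor's note: a writer stuck in uninterruptible (D) sleep is usually blocked in the kernel waiting on the FUSE daemon or a brick. Before rebooting, the kernel can report where it is blocked; these are generic diagnostics, not a fix.]

```shell
# List processes in uninterruptible sleep and what they wait on.
ps -eo state,pid,wchan:32,comm | awk '$1 == "D"'

# Dump kernel stacks of all blocked tasks to dmesg (needs sysrq).
echo w > /proc/sysrq-trigger
dmesg | tail -50

# Check whether the glusterfs FUSE client itself is alive/logging.
tail -50 /var/log/glusterfs/*.log
```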
18:01 haomaiwang joined #gluster
18:04 theron joined #gluster
18:10 Siavash joined #gluster
18:10 Siavash joined #gluster
18:11 theron joined #gluster
18:12 nathwill joined #gluster
18:16 robb_nl joined #gluster
18:19 mhulsman joined #gluster
18:24 luizcpg_ joined #gluster
18:24 plarsen joined #gluster
18:25 gem joined #gluster
18:33 bennyturns joined #gluster
18:51 theron joined #gluster
18:57 mhulsman joined #gluster
18:57 muneerse2 joined #gluster
19:01 haomaiwang joined #gluster
19:05 nishanth joined #gluster
19:06 amye joined #gluster
19:15 level7 joined #gluster
19:18 ashiq joined #gluster
19:24 chirino joined #gluster
19:36 Siavash joined #gluster
19:36 Siavash joined #gluster
19:39 theron_ joined #gluster
19:42 plarsen joined #gluster
19:53 johnmilton joined #gluster
19:56 robb_nl joined #gluster
19:56 ecoreply_ joined #gluster
19:57 brettnem joined #gluster
19:58 brettnem hey all. I'm using cluster to back a realtime application that reads small wav files. Without gluster, the files play great. With gluster there is like a 10 second pause before the files play. Like there's some huge access time. Any ideas?
20:01 haomaiwang joined #gluster
20:06 brettnem any ideas?
20:07 level7 joined #gluster
20:09 MikeLupe Hi brettnem - maybe you should spell "gluster" ;)
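[Editor's note: a multi-second stall on first read of a small file over a gluster FUSE mount is often lookup or heal-check latency rather than throughput. Some commonly tried knobs follow; the volume name is a placeholder and these are things to experiment with, not a known fix for brettnem's case.]

```shell
# Serve small files from the client cache (quick-read returns files
# below its size limit straight from the lookup response).
gluster volume set myvol performance.quick-read on
gluster volume set myvol performance.io-cache on

# Cache stat/lookup results to cut round trips on open.
gluster volume set myvol performance.stat-prefetch on

# If a brick is down or healing, reads can block on heal checks;
# rule that out first:
gluster volume heal myvol info
```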
20:11 level7 joined #gluster
20:11 jefbrown joined #gluster
20:12 kovshenin joined #gluster
20:15 kovsheni_ joined #gluster
20:20 kovshenin joined #gluster
20:23 sc0_ joined #gluster
20:24 shaunm joined #gluster
20:25 level7 joined #gluster
20:28 kovshenin joined #gluster
20:28 cogsu joined #gluster
20:28 crashmag_ joined #gluster
20:28 semiosis_ joined #gluster
20:29 atrius_ joined #gluster
20:29 semiosis joined #gluster
20:30 jockek joined #gluster
20:38 robb_nl joined #gluster
20:40 virusuy joined #gluster
20:40 kovshenin joined #gluster
20:44 kovshenin joined #gluster
21:00 kovsheni_ joined #gluster
21:01 haomaiwang joined #gluster
21:02 mhulsman joined #gluster
21:03 kovshenin joined #gluster
21:08 Wizek_ joined #gluster
21:11 kovsheni_ joined #gluster
21:12 brettnem joined #gluster
21:15 MugginsM joined #gluster
21:22 kovshenin joined #gluster
21:27 kovshenin joined #gluster
21:30 kovshenin joined #gluster
21:35 bio_ joined #gluster
21:36 kovsheni_ joined #gluster
21:45 bennyturns joined #gluster
21:54 amye joined #gluster
22:01 haomaiwang joined #gluster
22:01 bennyturns joined #gluster
22:04 MugginsM joined #gluster
22:07 DV joined #gluster
22:38 MugginsM joined #gluster
22:44 plarsen joined #gluster
22:55 shyam joined #gluster
22:57 betheynyx joined #gluster
22:59 jvandewege joined #gluster
23:01 ninkotech_ joined #gluster
23:01 haomaiwang joined #gluster
23:21 renout_away joined #gluster
23:27 MugginsM joined #gluster
23:44 muneerse joined #gluster
23:44 BogdanR joined #gluster
