
IRC log for #gluster, 2017-11-03


All times shown according to UTC.

Time Nick Message
00:05 shortdudey123 joined #gluster
00:48 msvbhat joined #gluster
01:18 ^andrea^ joined #gluster
02:20 gyadav__ joined #gluster
02:20 gyadav joined #gluster
02:55 msvbhat joined #gluster
02:56 ilbot3 joined #gluster
02:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:12 ^andrea^ joined #gluster
03:24 shyu joined #gluster
04:11 itisravi joined #gluster
04:28 karthik_us joined #gluster
04:34 atinm joined #gluster
04:36 ahino joined #gluster
04:42 skumar joined #gluster
04:44 kramdoss__ joined #gluster
04:51 msvbhat joined #gluster
04:57 Shu6h3ndu joined #gluster
05:02 msvbhat joined #gluster
05:13 aravindavk joined #gluster
05:17 Prasad joined #gluster
05:18 poornima joined #gluster
05:22 sanoj joined #gluster
05:27 hgowtham joined #gluster
05:28 ndarshan joined #gluster
05:37 susant joined #gluster
05:37 poornima joined #gluster
05:48 psony joined #gluster
05:52 apandey joined #gluster
05:56 Shu6h3ndu joined #gluster
06:05 msvbhat joined #gluster
06:15 Prasad joined #gluster
06:16 poornima joined #gluster
06:16 Prasad joined #gluster
06:17 jiffin joined #gluster
06:19 atinm joined #gluster
06:20 xavih joined #gluster
06:20 kotreshhr joined #gluster
06:44 skoduri joined #gluster
06:44 Peppard joined #gluster
07:05 atinm joined #gluster
07:13 jtux joined #gluster
07:14 rastar joined #gluster
07:19 ppai joined #gluster
07:20 Prasad_ joined #gluster
07:21 ppai joined #gluster
07:21 Jeroendv joined #gluster
07:22 Prasad__ joined #gluster
07:24 _KaszpiR_ joined #gluster
07:37 poornima joined #gluster
07:38 jiffin joined #gluster
07:41 ThHirsch joined #gluster
07:42 Saravanakmr joined #gluster
07:45 atinm joined #gluster
07:47 fsimonce joined #gluster
07:55 rafi joined #gluster
07:57 int_0x21 joined #gluster
08:00 ivan_rossi joined #gluster
08:05 rastar joined #gluster
08:06 jkroon joined #gluster
08:34 itisravi joined #gluster
08:38 itisravi joined #gluster
08:41 jiffin1 joined #gluster
08:55 the-me joined #gluster
08:56 marbu joined #gluster
09:14 _KaszpiR_ joined #gluster
09:30 sunny joined #gluster
09:31 kdhananjay joined #gluster
09:34 kdhananjay1 joined #gluster
09:36 poornima_ joined #gluster
09:41 kdhananjay joined #gluster
09:56 buvanesh_kumar joined #gluster
10:17 kdhananjay joined #gluster
10:27 om2 lvm if you must. xfs? hell, stay away
10:28 om2 I had a disaster due to xfs corruption
10:28 kdhananjay joined #gluster
10:29 om2 xfs_repair is often useless, unlike fsck, which works much better
10:30 om2 anyway, a bit off topic, just my 2 cents
10:30 om2 I would use partitions instead of lvm
10:31 om2 just depends on the complexity of your storage and how many bricks you need...
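
(A minimal sketch of the advice above: dry-run checking an XFS brick, and building a brick on a plain partition rather than LVM. The device /dev/sdb1 and brick path /data/brick1 are hypothetical; the mkfs options follow the Gluster quickstart.)

    # Check an XFS filesystem without writing any repairs (must be unmounted)
    umount /data/brick1
    xfs_repair -n /dev/sdb1

    # Brick on a plain partition: format, mount, and hand the directory to gluster
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /data/brick1
    mount /dev/sdb1 /data/brick1
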
10:37 sunny joined #gluster
10:38 poornima_ joined #gluster
10:38 gyadav_ joined #gluster
10:40 gyadav joined #gluster
10:44 baber joined #gluster
10:47 Peppard joined #gluster
11:04 kdhananjay joined #gluster
11:15 gyadav__ joined #gluster
11:17 gyadav joined #gluster
11:23 skumar joined #gluster
11:28 plarsen joined #gluster
11:31 kdhananjay1 joined #gluster
11:39 atinm joined #gluster
11:50 Jeroendv Hi, I came here yesterday because a colleague of mine did a CentOS 6 -> 7 migration (created a new, single-brick volume + rsynced everything). However, the new volume is broken.
11:51 Jeroendv Someone suggested trying an NFS mount and comparing it to the native gluster mount, but that's delayed a bit due to the firewall changes needed to do that.
11:52 Jeroendv However, I've now enabled extra logging on a test mount, and when I ls a directory, I get this line for every file that should be in that directory but doesn't show up:
11:52 Jeroendv [2017-11-03 11:01:34.041032] D [MSGID: 0] [dht-common.c:4999:dht_readdirp_cbk] 0-ams03testdata01-dht: Invalid stat, ignoring entry <name of the file> gfid 00000000-0000-0000-0000-000000000000 [Invalid argument]
11:53 Jeroendv From what I understand, that seems to indicate the DHT is ?broken? somehow? Anyone got any idea how I can fix that?
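
(One plausible cause of the all-zero gfid above: files rsynced straight into the brick directory, instead of through a Gluster mount, never get a trusted.gfid xattr assigned. A minimal check; the brick path /data/brick1 and server name server1 are hypothetical, and the volume name is taken from the log line above.)

    # Inspect the Gluster xattrs as stored on the brick;
    # a healthy entry shows a non-zero trusted.gfid value
    getfattr -d -m . -e hex /data/brick1/path/to/missing-file

    # Files written through a fuse mount of the volume get gfids assigned,
    # so re-copying the data via the mount is one way to repopulate them
    mount -t glusterfs server1:/ams03testdata01 /mnt/gluster
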
11:55 farhorizon joined #gluster
12:02 susant joined #gluster
12:07 skumar joined #gluster
12:14 map1541 joined #gluster
12:16 boutcheee520 joined #gluster
12:18 Saravanakmr joined #gluster
12:28 Klas that someone was me =P
12:28 Klas and, nope!
12:34 Jeroendv Ok, thanks anyway Klas!
12:37 phlogistonjohn joined #gluster
12:52 itisravi joined #gluster
12:52 msvbhat joined #gluster
12:54 DoubleJ joined #gluster
13:10 psony joined #gluster
13:23 baber joined #gluster
13:44 shyam joined #gluster
13:45 hmamtora joined #gluster
13:46 DoubleJ joined #gluster
13:50 phlogistonjohn joined #gluster
14:01 dgandhi1 joined #gluster
14:10 gyadav__ joined #gluster
14:11 gyadav joined #gluster
14:21 rastar joined #gluster
14:26 jbrooks joined #gluster
14:29 boutcheee520 joined #gluster
14:30 boutcheee520 In Gluster, there is not a "master" and "secondary" brick, is there? I need to do some maintenance on them and am trying to figure out which brick would be best...
14:32 boutcheee520 I have the bricks set up to replicate between one another.
14:47 aravindavk joined #gluster
14:50 kotreshhr left #gluster
15:12 kpease joined #gluster
15:14 kpease_ joined #gluster
15:14 hmamtora you should be ok for such maintenance, boutcheee520
15:15 hmamtora What is the expected duration of the maintenance, and do you expect the gluster volume to be written to continuously while the maintenance is performed?
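
(For reference: in a replicate volume the bricks are peers, with no designated master. A minimal pre-maintenance check, assuming a hypothetical volume name myvol.)

    # Confirm all bricks and self-heal daemons are online
    gluster volume status myvol

    # Verify there are no pending heals before taking a brick down
    gluster volume heal myvol info

    # After maintenance, watch the pending-heal count drain back to zero
    watch gluster volume heal myvol info
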
15:17 gyadav joined #gluster
15:17 gyadav__ joined #gluster
15:30 baber joined #gluster
15:33 map1541 joined #gluster
15:44 ppai joined #gluster
15:45 farhorizon joined #gluster
15:53 plarsen joined #gluster
16:10 buvanesh_kumar joined #gluster
16:11 rastar joined #gluster
16:17 msvbhat joined #gluster
16:22 farhorizon joined #gluster
16:27 kramdoss__ joined #gluster
16:29 map1541 joined #gluster
16:35 jkroon joined #gluster
16:47 ivan_rossi left #gluster
16:56 buvanesh_kumar joined #gluster
17:10 gyadav__ joined #gluster
17:10 gyadav joined #gluster
17:11 skumar joined #gluster
17:31 csaba joined #gluster
17:31 msvbhat joined #gluster
18:09 glusterbot joined #gluster
18:18 glusterbot joined #gluster
18:22 map1541 joined #gluster
18:29 pdrakeweb joined #gluster
18:38 farhorizon joined #gluster
18:48 pdrakeweb joined #gluster
19:26 farhorizon joined #gluster
19:53 _KaszpiR_ joined #gluster
19:53 farhorizon joined #gluster
21:11 shyam joined #gluster
22:27 farhorizon joined #gluster
22:45 MrAbaddon joined #gluster
22:54 pioto joined #gluster
22:57 gbox Still trying to understand the many-bricks-per-node approach. Most people have bricks built from various RAID configurations (raid6, raid10, raidz), which should have better I/O than JBOD. But I have that set up and the bottleneck is definitely disk I/O. If gluster provides independent I/O across bricks, the bottleneck would only be the bus and block layer + scheduling. That's a big IF, but I can see how that approach would work.
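
(One way to test where the bottleneck sits is Gluster's built-in profiler, which reports latency and operation counts per brick; the volume name myvol is hypothetical.)

    # Collect per-brick I/O statistics while the workload runs
    gluster volume profile myvol start
    gluster volume profile myvol info
    gluster volume profile myvol stop
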
23:22 Acinonyx joined #gluster
