
IRC log for #gluster, 2018-01-04


All times are shown in UTC.

Time Nick Message
00:04 ic0n joined #gluster
00:26 protoporpoise joined #gluster
00:37 protoporpoise joined #gluster
01:12 ic0n_ joined #gluster
01:16 ic0n joined #gluster
01:21 d-fence joined #gluster
01:25 jcall joined #gluster
01:26 renout14 joined #gluster
01:40 MrAbaddon joined #gluster
01:48 gyadav joined #gluster
01:53 ic0n joined #gluster
02:09 daMaestro joined #gluster
02:17 ic0n joined #gluster
02:20 kettlewell joined #gluster
02:38 MrAbaddon joined #gluster
02:57 ilbot3 joined #gluster
02:57 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:02 susant joined #gluster
03:09 nbalacha joined #gluster
03:11 Vishnu__ joined #gluster
03:38 ic0n joined #gluster
03:44 atinm_ joined #gluster
03:49 sanoj joined #gluster
03:55 kdhananjay joined #gluster
03:57 itisravi joined #gluster
03:59 gyadav joined #gluster
04:02 ic0n joined #gluster
04:06 srsc a few of these warning messages show up periodically in the mount logs: https://hastebin.com/ewonihaves.css
04:06 glusterbot Title: hastebin (at hastebin.com)
04:06 psony_ joined #gluster
04:07 srsc also noticed that `gluster volume heal erasure info` produces a list of about 6 million gfids
04:08 Vishnu_ joined #gluster
04:08 srsc this volume is exploratory/backup, so it could be destroyed and recreated when the failed node is restored. i'm mostly trying to evaluate stability/feasibility and work out any config issues.
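A minimal sketch for sizing a heal backlog like the one srsc describes ("erasure" is the volume name from the message above; the grep pattern assumes the usual gfid-style lines in the heal info output, and the summary subcommand is only in recent releases):

    # count pending heal entries across the brick sections
    gluster volume heal erasure info | grep -c 'gfid:'
    # recent releases also offer a compact per-brick summary (version-dependent)
    gluster volume heal erasure info summary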
04:37 hgowtham joined #gluster
04:49 kramdoss_ joined #gluster
05:02 ompragash joined #gluster
05:04 skumar joined #gluster
05:10 Shu6h3ndu joined #gluster
05:13 ic0n joined #gluster
05:19 jiffin joined #gluster
05:28 ompragash_ joined #gluster
05:31 ndarshan joined #gluster
05:31 karthik_us joined #gluster
05:33 kotreshhr joined #gluster
05:33 protoporpoise left #gluster
05:50 msvbhat joined #gluster
05:54 sunnyk joined #gluster
06:00 ic0n joined #gluster
06:07 plarsen joined #gluster
06:09 Saravanakmr joined #gluster
06:15 Prasad_ joined #gluster
06:16 ic0n joined #gluster
06:17 Humble joined #gluster
06:27 apandey joined #gluster
06:30 beetlebum joined #gluster
06:32 susant joined #gluster
06:32 TBlaar2 joined #gluster
06:36 kramdoss_ joined #gluster
06:37 susant joined #gluster
06:39 xavih joined #gluster
06:41 drymek joined #gluster
06:46 ic0n joined #gluster
06:52 ompragash__ joined #gluster
06:54 varshar joined #gluster
07:03 aravindavk joined #gluster
07:05 kramdoss_ joined #gluster
07:11 varshar joined #gluster
07:12 ic0n joined #gluster
07:12 ppai joined #gluster
07:23 jtux joined #gluster
07:29 Humble joined #gluster
07:52 msvbhat joined #gluster
07:58 mbukatov joined #gluster
08:00 rafi1 joined #gluster
08:01 ic0n joined #gluster
08:07 susant joined #gluster
08:12 susant joined #gluster
08:20 b_bezak joined #gluster
08:20 b_bezak left #gluster
08:27 beetlebum joined #gluster
08:32 msvbhat joined #gluster
08:34 beetlebum joined #gluster
08:38 beetlebum joined #gluster
08:43 fsimonce joined #gluster
08:51 fsimonce joined #gluster
08:52 ompragash joined #gluster
08:55 msvbhat joined #gluster
08:58 beetlebum_ joined #gluster
08:59 ic0n joined #gluster
09:09 sanoj joined #gluster
09:10 kshlm joined #gluster
09:11 DV joined #gluster
09:11 ahino joined #gluster
09:13 kshlm joined #gluster
09:19 kramdoss_ joined #gluster
09:22 [fre] Guys, by now many of you are probably aware of the Intel-cpu-issues, I guess.
09:22 buvanesh_kumar joined #gluster
09:22 [fre] Do you have any idea how patching the linux-kernels will affect performance of gluster & incoming requests?
09:28 misc I guess not much, I suspect gluster is I/O bound anyway
09:28 kramdoss_ joined #gluster
09:29 sanoj joined #gluster
09:34 [fre] If my understanding is correct, any network- or fs-based access involves kernel actions, and as such the kernel switching memory spaces?
09:38 misc yeah, but there is a question of magnitude
09:38 misc if it takes 1ms to do a syscall before and 2ms after, that's bad
09:38 misc but if it then takes 500ms to get data from disk, the operation went from 501 to 502
09:39 misc which isn't meaningfully different
09:39 misc (I pulled the numbers out of thin air for the purpose of the demonstration)
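Putting misc's thin-air numbers into an actual calculation: doubling the syscall cost adds roughly 0.2% to a disk-bound operation.

    # 501ms -> 502ms end-to-end: the syscall overhead doubles, the operation barely moves
    awk 'BEGIN { before = 501; after = 502; printf "%.2f%% slower\n", (after - before) / before * 100 }'
    # prints: 0.20% slower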
09:39 beetlebum_ joined #gluster
09:39 ompragash_ joined #gluster
09:41 ThHirsch joined #gluster
09:42 omark joined #gluster
09:44 ThHirsch1 joined #gluster
09:44 ompragash__ joined #gluster
09:49 msvbhat joined #gluster
09:49 ompragash_ joined #gluster
09:50 hgowtham joined #gluster
09:51 [diablo] joined #gluster
10:13 [fre] Hmm... if you state the numbers that way... but that doesn't square with the numbers already reported by more 'generic' benchmarking.
10:14 [fre] Did somebody try benchmarking it 'really'? I sadly don't have the infrastructure to try and measure the impact. But I'm tempted to keep it all disabled for that matter.
10:17 misc well, that's the problem with benchmarks, they are there to test isolated components to find regressions
10:17 misc but that's hard to translate to what happens with complex systems
10:17 misc [fre]: how do you measure your perf at the moment ?
10:18 misc (cause I assume you do not have a setup to test the change, but in prod, you have something ?)
10:19 ndarshan joined #gluster
10:28 rouven joined #gluster
10:30 mbukatov joined #gluster
10:30 ic0n joined #gluster
10:35 gyadav joined #gluster
10:42 rafi joined #gluster
10:44 sunnyk joined #gluster
10:50 janlam7 joined #gluster
10:52 ompragash_ joined #gluster
10:54 kdhananjay joined #gluster
10:56 ic0n joined #gluster
11:06 skumar_ joined #gluster
11:10 skumar__ joined #gluster
11:17 MrAbaddon joined #gluster
11:23 ic0n joined #gluster
11:27 rsalmon joined #gluster
11:27 nishanth joined #gluster
11:29 rafi joined #gluster
11:29 omark joined #gluster
11:30 shellclear_ joined #gluster
11:32 rafi1 joined #gluster
11:34 atinm joined #gluster
11:39 shellclear joined #gluster
11:40 arif-ali joined #gluster
11:45 TBlaar joined #gluster
11:59 ic0n joined #gluster
12:00 arif-ali joined #gluster
12:00 ompragash__ joined #gluster
12:07 buvanesh_kumar_ joined #gluster
12:09 [fre] misc, it's purely based on MB/s... we're doing mostly huge amounts of small files, so the kernel is & will be involved many times
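A rough way to put a number on small-file throughput from a gluster mount, as a minimal sketch (the mount path is an assumption; per-file latency, not MB/s, is where the syscall overhead would actually show up):

    # time creating 1000 4KiB files through the mount (path is hypothetical)
    time bash -c 'for i in $(seq 1 1000); do dd if=/dev/zero of=/mnt/glustervol/small.$i bs=4k count=1 status=none; done'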
12:09 TBlaar joined #gluster
12:12 itisravi__ joined #gluster
12:16 kettlewell joined #gluster
12:17 itisravi joined #gluster
12:17 nbalacha joined #gluster
12:20 TBlaar joined #gluster
12:23 TBlaar2 joined #gluster
12:25 mbukatov joined #gluster
12:27 TBlaar joined #gluster
12:33 karthik_us joined #gluster
12:37 msvbhat joined #gluster
12:40 TBlaar joined #gluster
12:42 beetlebum joined #gluster
12:44 rsalmon joined #gluster
12:46 rsalmon joined #gluster
12:47 rsalmon_ joined #gluster
12:48 kotreshhr left #gluster
12:48 rsalmon_ joined #gluster
12:53 shellclear joined #gluster
12:54 marbu joined #gluster
12:56 nbalacha joined #gluster
12:57 ic0n joined #gluster
12:57 rsalmon_ joined #gluster
13:01 rsalmon joined #gluster
13:16 rsalmon_ joined #gluster
13:31 rsalmon_ joined #gluster
13:34 major joined #gluster
13:34 Rakkin_ joined #gluster
13:35 ic0n joined #gluster
13:41 saybeano joined #gluster
13:52 nbalacha joined #gluster
13:55 ic0n joined #gluster
13:58 jiffin joined #gluster
14:01 ahino1 joined #gluster
14:06 psony|afk joined #gluster
14:13 ic0n joined #gluster
14:14 msvbhat joined #gluster
14:20 gyadav joined #gluster
14:20 kpease joined #gluster
14:26 psony|afk joined #gluster
14:29 skylar1 joined #gluster
14:32 Shu6h3ndu joined #gluster
14:34 jobewan joined #gluster
14:35 nishanth joined #gluster
14:37 ic0n joined #gluster
14:44 Rakkin_ joined #gluster
14:46 phlogistonjohn joined #gluster
14:47 MrAbaddon joined #gluster
14:49 kramdoss_ joined #gluster
14:50 nishanth joined #gluster
14:54 shellclear joined #gluster
14:57 ic0n joined #gluster
15:10 psony|afk joined #gluster
15:15 jbrooks joined #gluster
15:17 jkroon joined #gluster
15:20 ic0n joined #gluster
15:37 kramdoss_ joined #gluster
15:39 ahino joined #gluster
15:50 atinm_ joined #gluster
15:51 wolfshappen joined #gluster
15:55 ic0n joined #gluster
15:55 plarsen joined #gluster
15:58 Humble joined #gluster
15:58 jkroon joined #gluster
16:05 ompragash joined #gluster
16:07 ompragash_ joined #gluster
16:20 TWD-GM joined #gluster
16:20 TWD-GM hey guys, i need some advice, fairly new gluster user (inherited system)
16:21 TWD-GM can anyone help tell me how gluster will react to the following scenario ?
16:22 TWD-GM 3 node setup, 1x arbiter, 2x stores - pushing out various NFS datastores
16:23 TWD-GM i am currently copying files from 1 DS to another DS directly on one of the filers - will the arbiter pick up the change and write the same files to the 2nd filer ?
16:23 JoeJulian Unlikely.
16:24 TWD-GM do i need to do anything for the arbi to pickup the change and show me the files in the NFS datastore ?
16:24 TWD-GM sync/update etc ?
16:24 JoeJulian By writing directly to a brick, you're bypassing the addition of metadata to the ,,(extended attributes). This causes glusterfs to not even know the files exist.
16:24 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
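Illustrating glusterbot's #1 against a brick path (the path and volume name here are hypothetical; this is run on the brick backend, not through the mount):

    # dump the trusted.* attributes glusterfs stores alongside each brick file
    getfattr -m . -d -e hex /bricks/brick1/myvol/path/to/file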
16:25 TWD-GM the reason for copying the files directly is due to the NFS share failing
16:25 JoeJulian You *might* be able to fix that with a "gluster volume heal $volname full" but there are no guarantees.
16:25 JoeJulian You will get, by definition, undocumented behavior.
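The heal JoeJulian mentions, as a minimal sketch with an assumed volume name; heal info afterwards shows what the crawl has queued:

    # trigger a full self-heal crawl, then watch the backlog drain
    gluster volume heal myvol full
    gluster volume heal myvol info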
16:26 TWD-GM is there a way i can do this "safely" ?
16:27 Vapez joined #gluster
16:28 ic0n joined #gluster
16:29 JoeJulian Mount the volume somewhere and copy to that.
16:29 TWD-GM with the full heal suggestion, how do i wait for the copies to complete, or can i do a heal start early to get the bulk healed, or will further copies break the healing ?
16:30 JoeJulian heal full starts with a crawl of the filesystem so any files added after that crawl will not be picked up.
16:30 TWD-GM but if i heal again after the copy is complete it will just pick up the final files ?
16:31 TWD-GM smaller/quicker heal ?
16:31 JoeJulian I don't even know for sure that it'll work.
16:32 JoeJulian I'd really just mount the volume and copy to that.
16:32 plarsen joined #gluster
16:32 JoeJulian You can mount it on the same machine that you're currently copying to.
16:32 jstrunk joined #gluster
16:32 snehring anyone seen any performance degradation with gluster and the KPTI patches?
16:32 JoeJulian Just like the hair club for men, your node can also be a client.
16:33 JoeJulian Oh, this is clearly going to require a factoid.
16:36 JoeJulian @learn kpti as Most file based operations take ~500 times longer than the operations affected by the KPTI patch. No actual benchmarking has been reported, but the percent difference in operations should be negligible.
16:36 glusterbot JoeJulian: The operation succeeded.
16:39 snehring Was worried about the impact at brick level, but even if it does end up being significant I should think the risk of disabling it on machines that no one should be running arbitrary code on would be minimal
16:40 snehring in my environment anyway
16:41 JoeJulian Seems reasonable to me.
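One way to check whether KPTI is active, and the boot flags for turning it off, both hedged since the exact log wording and flag name vary by kernel and distro build:

    # patched kernels log page-table isolation at boot (wording varies)
    dmesg | grep -i 'page table isolation'
    # disabling it (at your own risk) is a boot parameter, typically pti=off or nopti
    grep -o 'pti=off\|nopti' /proc/cmdline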
16:41 TWD-GM thanks joe, sorry i am a bit of a linux noob
16:42 JoeJulian No problem!
16:42 TWD-GM so i can mount the NFS share on one of the hosts
16:42 TWD-GM sorry one of the filers ?
16:42 JoeJulian Better a linux noob than a windows expert. ;)
16:42 TWD-GM lol :)
16:42 JoeJulian Yes, you can mount nfs or fuse on the server.
16:43 sunny joined #gluster
16:43 TWD-GM sorry can you noob'ify for me - mount /ip:/share ?
16:43 JoeJulian mount -t glusterfs localhost:$myvolume /mnt
16:44 JoeJulian obviously, $myvolume is the name of your volume.
16:44 TWD-GM thx
16:50 TWD-GM i am not seeing any files in the mount point :( so i assume the share name is wrong
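A quick way to rule out a wrong volume name before mounting (both are standard gluster CLI commands):

    # list the volumes this cluster knows about, then mount by that exact name
    gluster volume list
    # or, for brick layout and options as well
    gluster volume info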
16:53 MrAbaddon joined #gluster
16:55 dominicpg joined #gluster
17:05 JoeJulian Not necessarily. It could be how you loaded the files. Sorry, I've got meetings for a bit. bbl.
17:06 TWD-GM its a vmfs volume
17:06 ThHirsch joined #gluster
17:06 jiffin joined #gluster
17:06 TWD-GM which probably wont help..
17:09 ic0n joined #gluster
17:12 omark joined #gluster
17:14 TWD-GM anyone else able to assist while joes away ?
17:15 TWD-GM (he says while clutching at straws..)
17:20 jkroon joined #gluster
17:20 renout14 joined #gluster
17:35 vbellur joined #gluster
17:39 pioto joined #gluster
17:44 ThHirsch joined #gluster
17:45 ic0n joined #gluster
18:06 Humble joined #gluster
18:07 jiffin joined #gluster
18:09 rafi1 joined #gluster
18:13 msvbhat joined #gluster
18:21 ic0n joined #gluster
18:22 skylar1 joined #gluster
18:41 skylar1 joined #gluster
18:49 mlhess joined #gluster
18:51 ic0n joined #gluster
19:05 ahino joined #gluster
19:29 rouven joined #gluster
19:36 nirokato joined #gluster
19:40 ic0n joined #gluster
19:55 ic0n joined #gluster
20:01 drymek joined #gluster
20:18 DV joined #gluster
20:19 ic0n joined #gluster
20:35 ic0n joined #gluster
20:35 drymek joined #gluster
20:49 snehring joined #gluster
20:50 DV joined #gluster
20:51 caitnop joined #gluster
21:00 ic0n joined #gluster
21:15 protoporpoise joined #gluster
21:21 ic0n joined #gluster
21:26 jstrunk_ joined #gluster
21:34 major joined #gluster
21:36 ic0n joined #gluster
21:43 rouven joined #gluster
21:47 illwieckz joined #gluster
21:51 ic0n joined #gluster
21:53 freephile joined #gluster
21:54 freephile joined #gluster
22:10 kpease_ joined #gluster
22:11 illwieckz joined #gluster
22:14 illwieckz joined #gluster
22:26 ic0n joined #gluster
22:51 ic0n joined #gluster
23:02 illwieckz joined #gluster
23:11 ic0n joined #gluster
23:23 shellclear joined #gluster
23:36 plarsen joined #gluster
23:50 ic0n joined #gluster
