
IRC log for #gluster, 2016-08-05


All times are shown in UTC.

Time Nick Message
00:04 masuberu joined #gluster
01:24 shyam joined #gluster
01:28 wadeholler joined #gluster
01:28 Lee1092 joined #gluster
01:33 shyam left #gluster
01:34 aj__ joined #gluster
01:39 hagarth joined #gluster
01:42 Javezim Am getting an error whilst trying to boot a VMware VM whose storage is hosted on GlusterFS: "116 Stale NFS File Handle". Does anyone know how to resolve this?
01:46 shdeng joined #gluster
01:48 cliluw joined #gluster
01:56 Wizek joined #gluster
02:04 shdeng joined #gluster
02:08 chirino joined #gluster
02:22 harish_ joined #gluster
02:25 Wizek joined #gluster
02:28 julim joined #gluster
02:44 newdave joined #gluster
03:01 Gambit15 joined #gluster
03:11 magrawal joined #gluster
03:28 skoduri joined #gluster
03:44 nbalacha joined #gluster
03:49 atinm joined #gluster
03:50 nishanth joined #gluster
03:55 arcolife joined #gluster
04:24 ira joined #gluster
04:30 itisravi joined #gluster
04:30 elico joined #gluster
04:34 shubhendu joined #gluster
04:37 poornimag joined #gluster
04:37 elico left #gluster
04:43 aspandey joined #gluster
04:44 Bhaskarakiran joined #gluster
04:48 Wizek joined #gluster
04:50 kdhananjay joined #gluster
04:50 sanoj joined #gluster
04:52 nehar joined #gluster
04:54 itisravi joined #gluster
04:55 RameshN joined #gluster
04:55 Apeksha joined #gluster
05:03 ppai joined #gluster
05:03 kovshenin joined #gluster
05:12 karthik_ joined #gluster
05:19 jiffin joined #gluster
05:24 Muthu joined #gluster
05:25 aravindavk joined #gluster
05:26 ndarshan joined #gluster
05:33 jiffin1 joined #gluster
05:36 hgowtham joined #gluster
05:36 Manikandan joined #gluster
05:36 aravindavk joined #gluster
05:40 mchangir joined #gluster
05:45 poornimag joined #gluster
05:45 aspandey joined #gluster
05:46 rafi joined #gluster
05:48 karthik_ joined #gluster
05:49 ppai joined #gluster
05:54 rafi1 joined #gluster
06:00 level7 joined #gluster
06:00 masuberu joined #gluster
06:09 bwerthmann joined #gluster
06:16 mchangir joined #gluster
06:20 jtux joined #gluster
06:20 karnan joined #gluster
06:26 nishanth joined #gluster
06:26 rastar joined #gluster
06:36 hchiramm joined #gluster
06:41 harish_ joined #gluster
06:49 rastar joined #gluster
06:50 nohitall_ left #gluster
06:52 deniszh joined #gluster
06:54 Philambdo joined #gluster
06:58 [diablo] joined #gluster
07:00 level7 joined #gluster
07:04 satya4ever joined #gluster
07:05 javiM left #gluster
07:05 jiffin joined #gluster
07:06 msvbhat joined #gluster
07:08 rafi1 joined #gluster
07:09 aravindavk joined #gluster
07:10 devyani7_ joined #gluster
07:13 anil joined #gluster
07:13 devyani7_ joined #gluster
07:14 fsimonce joined #gluster
07:18 Manikandan joined #gluster
07:18 ashiq joined #gluster
07:22 ppai joined #gluster
07:23 karthik_ joined #gluster
07:26 atalur joined #gluster
07:28 aspandey joined #gluster
07:30 poornimag joined #gluster
07:45 suliba joined #gluster
07:50 mchangir joined #gluster
07:50 Apeksha_ joined #gluster
07:52 level7 joined #gluster
07:52 arcolife joined #gluster
07:53 Bhaskarakiran joined #gluster
07:55 Bhaskarakiran joined #gluster
07:56 MikeLupe joined #gluster
07:56 arcolife joined #gluster
07:58 deniszh joined #gluster
08:00 harish_ joined #gluster
08:07 somlin22 joined #gluster
08:13 ashiq joined #gluster
08:14 om joined #gluster
08:18 Slashman joined #gluster
08:18 hchiramm joined #gluster
08:43 lezo joined #gluster
08:52 mdavidson joined #gluster
08:58 Apeksha joined #gluster
09:03 Manikandan joined #gluster
09:16 atalur joined #gluster
09:23 hackman joined #gluster
09:32 aj__ joined #gluster
09:38 msvbhat joined #gluster
10:07 atalur joined #gluster
10:12 mchangir joined #gluster
10:31 bluenemo joined #gluster
10:39 karthik_ joined #gluster
10:48 suliba joined #gluster
10:56 Wizek joined #gluster
11:03 msvbhat joined #gluster
11:05 pur joined #gluster
11:09 Apeksha joined #gluster
11:15 ramky joined #gluster
11:16 Norky joined #gluster
11:17 jtux joined #gluster
11:22 hchiramm joined #gluster
11:25 Wizek joined #gluster
11:34 Siavash__ joined #gluster
11:36 Saravanakmr joined #gluster
11:45 robb_nl joined #gluster
11:51 ramky joined #gluster
11:54 Wizek joined #gluster
11:54 Manikandan joined #gluster
12:12 julim joined #gluster
12:15 Apeksha joined #gluster
12:15 nbalacha joined #gluster
12:28 suliba joined #gluster
12:30 misc so, I am a bit puzzled: is this valid in a gluster config file?
12:30 misc option cache-size `grep 'MemTotal' /proc/meminfo  | awk '{print $2 * 0.2 / 1024}' | cut -f1 -d.`MB
12:31 R4yTr4cer joined #gluster
12:32 hagarth joined #gluster
12:35 kkeithley I suspect not
12:35 nigelb what the....
12:35 nigelb I hope the config doesn't evaluate bash.
12:36 nigelb That would be bad™
12:36 kkeithley RealBad®
12:37 misc https://infrastructure.fedoraproject.org/cgit/ansible.git/tree/roles/gluster/client/templates/client.config#n41
12:37 glusterbot Title: ansible.git - ansible playbook/files/etc repository for fedora infrastructure. (at infrastructure.fedoraproject.org)
12:37 misc it's interesting that gluster doesn't seem to have trouble parsing the config
12:38 misc but I guess I can grab a sysadmin while at Flock :)
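The channel's suspicion below is right: glusterd treats option values as literal strings and does not shell-evaluate backticks. A minimal sketch of the safer pattern is to compute the value outside gluster and set it explicitly; the volume name myvol is hypothetical:

    # 20% of MemTotal, in MB, mirroring the pipeline quoted above
    CACHE_MB=$(awk '/MemTotal/ {printf "%d", $2 * 0.2 / 1024}' /proc/meminfo)
    gluster volume set myvol performance.cache-size "${CACHE_MB}MB"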
12:45 chirino joined #gluster
12:54 ndevos misc: yes, blame puiterwijk
12:57 misc ndevos: he's been avoiding me since this morning
12:57 misc I said "hey patrick, we need you to speak about openstack"
12:57 misc he threw a smoke grenade and disappeared from Poland
12:57 somlin22 joined #gluster
12:59 ndevos misc: oh, yes, openstack definitely is not his favorite topic
12:59 ndevos misc: you can make him disappear by mentioning copr too
13:00 misc ndevos: true :)
13:05 plarsen joined #gluster
13:08 plarsen joined #gluster
13:10 siada joined #gluster
13:15 Apeksha joined #gluster
13:18 dnunez joined #gluster
13:25 _md2k_ joined #gluster
13:25 plarsen joined #gluster
13:32 jiffin1 joined #gluster
13:36 archit_ joined #gluster
13:36 aravindavk joined #gluster
13:47 hagarth joined #gluster
13:49 siada how can I force 2 gluster volumes to sync up? I'm getting 'possible split-brain' and I just want to ignore that, take the contents of one volume as gospel, and sync up. Is this possible?
13:50 ahino joined #gluster
14:06 jwd joined #gluster
14:08 skylar joined #gluster
14:23 bowhunter joined #gluster
14:24 plarsen joined #gluster
14:28 dlambrig_ left #gluster
14:29 mbukatov joined #gluster
14:29 shaunm joined #gluster
14:29 ndevos siada: slide #15 of http://events.linuxfoundation.org/sites/events/files/slides/glusterfs-AFR-LinuxCon_EU-2015_0.pdf
14:30 ndevos siada: I think it is also possible that files are temporarily almost in split-brain when there is I/O going on, while the replicate-transaction is in progress
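The slides ndevos links cover AFR's split-brain handling; since gluster 3.7 there is also a CLI for policy-based resolution. A minimal sketch of taking one brick as the authoritative copy, assuming a replica volume myvol with a brick on node1 (names hypothetical):

    # list the files currently in split-brain
    gluster volume heal myvol info split-brain
    # resolve every split-brain file using node1's brick as the source
    gluster volume heal myvol split-brain source-brick node1:/bricks/myvol/data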
14:36 edong23 joined #gluster
14:38 siada ndevos: does it have any effect that the volumes aren't mounted? I've got a 2-node cluster with a volume on each node (I'm trying to use gluster purely for file replication/syncing), but the volumes themselves are never mounted via 'mount'; they're just folders that exist.
14:39 ndevos siada: gluster can only replicate and track changes when the volume is mounted and changes to the contents are made through that mountpoint
14:40 ndevos siada: accessing the bricks of the volume directly will cause many untracked changes, and many other problems - don't do that :)
14:40 siada that's a bummer... does it have to be mounted as localhost:volname? the volume seems to run slower when mounted in this way as if it's actually going over the network to process stuff
14:41 ndevos yes, you can mount it from localhost somewhere
14:41 siada yes I know, but is there a way to mount it via glusterfs that doesn't specify a host? I already mount it with localhost but that seems to affect performance
14:42 msvbhat joined #gluster
14:42 ndevos siada: for example, I like to use /bricks/<VOLUME>/data as the path to the bricks that I pass to "gluster volume create", and then I mount localhost:/<VOLUME> on /mnt or wherever I need it
14:42 ndevos siada: well, the replication is synchronous; it will need to go over the network in order to keep the replicas in sync
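A minimal sketch of the layout ndevos describes, for a 2-node replica; hostnames and paths are hypothetical, and all application I/O goes through the mountpoint, never the brick directories:

    # bricks live under /bricks/<VOLUME>/data on each node
    gluster volume create myvol replica 2 node1:/bricks/myvol/data node2:/bricks/myvol/data
    gluster volume start myvol
    # on each node, mount from localhost and point the application at the mount
    mkdir -p /mnt/myvol
    mount -t glusterfs localhost:/myvol /mnt/myvol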
14:43 kpease joined #gluster
14:44 siada ndevos: hmm, maybe gluster doesn't quite fit what I'm trying to do then... I'm just trying to keep the 'content' between 2 wordpress installs in sync
14:44 siada so that if you upload some media to a site, it's replicated to the site's mirrors (the sites are load balanced, so separate servers)
14:45 ndevos siada: you can probably improve performance if you tune your webservers and ,,(php) installation
14:45 glusterbot siada: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
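One way to apply the bot's second suggestion is via fstab-style mount options, which mount.glusterfs passes through to the FUSE client. A sketch with illustrative timeout values (in seconds); the volume name and mountpoint are hypothetical, and aggressive caching trades coherency for speed:

    localhost:/myvol  /var/www/shared  glusterfs  attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache  0 0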
14:46 ahino joined #gluster
14:54 shyam joined #gluster
14:58 bwerthmann joined #gluster
15:00 siada ndevos: I don't suppose you know off the top of your head how I can delete 1.3m files from a directory in linux? everything I've tried just hangs
15:03 ndevos siada: well, "rm -rf /path/to/dir/to/remove" will work, but it takes a while because each file has to be removed individually
15:03 Jacob843 joined #gluster
15:03 siada my server connection times out before it completes :)
15:03 ndevos siada: you can do "rm -rfv ..." to show some progress and see if it really hangs
15:04 ndevos siada: run it in tmux or screen ;-)
15:05 siada the -rfv should probably keep me 'active' since it's dumping to the screen
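A sketch of the combined advice, assuming the deletion runs on the server holding the files; tmux keeps the job alive if the SSH session drops, and find's -delete unlinks entries one by one without building a huge argument list:

    tmux new -s cleanup                      # survive an SSH timeout
    find /path/to/dir -mindepth 1 -delete    # remove the directory's contents
    # or, without tmux: nohup rm -rf /path/to/dir &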
15:07 shubhendu joined #gluster
15:07 Wojtek joined #gluster
15:09 jiffin joined #gluster
15:11 wushudoin joined #gluster
15:12 Wojtek I'm having an issue with Gluster performance. It sometimes takes ~30 seconds to save a 10kb file to the volume. While there was no other activity on the nodes, I did an strace on one of the gluster processes that had very high cpu usage. Here's a snapshot of a few seconds: https://paste.fedoraproject.org/401439/38964147/raw/. All my heals are disabled: https://paste.fedoraproject.org/401742/47040976/raw/
15:12 Wojtek and I'm really unsure of what is causing Gluster to read the attributes of all of these files for no apparent reason.
15:12 glusterbot Title: Error (at paste.fedoraproject.org)
15:15 arcolife joined #gluster
15:17 bkolden joined #gluster
15:17 shyam joined #gluster
15:26 arcolife joined #gluster
15:26 Wizek joined #gluster
15:27 harish_ joined #gluster
15:28 aravindavk joined #gluster
15:35 plarsen joined #gluster
15:53 karnan joined #gluster
16:15 Wizek joined #gluster
16:28 bwerthmann joined #gluster
16:29 luis_silva joined #gluster
16:30 luis_silva hey guys, just a quick question. We're going to upgrade from 3.5.3 to 3.7.13 next week. Are there any gotchas we should be aware of?
16:31 luis_silva We just have a 10-node distributed cluster, no replication.
16:32 luis_silva Volume Name: sum1-gstore
16:32 luis_silva Type: Distribute
16:32 luis_silva Volume ID: ****************
16:32 luis_silva Status: Started
16:32 luis_silva Number of Bricks: 40
16:32 luis_silva Transport-type: tcp
16:33 luis_silva Options Reconfigured:
16:33 luis_silva diagnostics.client-log-level: ERROR
16:33 luis_silva diagnostics.brick-log-level: ERROR
16:33 luis_silva features.quota: on
16:33 luis_silva features.default-soft-limit: 100%
16:33 luis_silva features.quota-deem-statfs: on
16:34 luis_silva I'm following this doc https://gluster.readthedocs.io/en/latest/Upgrade-Guide/Upgrade%20to%203.7/
16:34 glusterbot Title: Upgrade to 3.7 - Gluster Docs (at gluster.readthedocs.io)
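For a pure distribute volume there are no replicas to carry I/O during the upgrade, so the guide's rolling approach still needs a downtime window. A minimal sketch of the per-server steps, assuming an RPM-based distro; since the volume has features.quota on, the guide's extra quota steps would also apply:

    killall glusterfsd glusterfs glusterd    # stop all gluster processes
    yum update glusterfs glusterfs-server glusterfs-fuse
    systemctl start glusterd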
16:35 kpease joined #gluster
16:39 shyam1 joined #gluster
16:47 armyriad joined #gluster
16:49 Gnomethrower joined #gluster
17:02 om joined #gluster
17:02 kovshenin joined #gluster
17:12 arcolife joined #gluster
17:13 Wojtek my experience with 3.7 was that it crashed when you tried to edit some volume settings like healing
17:21 level7 joined #gluster
17:26 karnan joined #gluster
17:33 ahino joined #gluster
17:43 sanoj joined #gluster
17:43 armyriad joined #gluster
17:48 luis_silva ok good to know. We don't plan on making this a replicated cluster so hopefully we won't run into this.
17:55 newdave joined #gluster
18:07 bwerthmann joined #gluster
18:09 bowhunter joined #gluster
18:10 uebera|| joined #gluster
18:11 om joined #gluster
18:29 ramky joined #gluster
18:30 luis_silva joined #gluster
18:44 ahino joined #gluster
18:59 skoduri joined #gluster
19:00 jiffin joined #gluster
19:06 BitByteNybble110 joined #gluster
19:06 guhcampos joined #gluster
19:07 kovshenin joined #gluster
19:21 jiffin1 joined #gluster
19:32 natarej joined #gluster
19:36 kpease joined #gluster
19:50 Wojtek Regarding my previous performance problem: I've run Gluster with debug logs for a few moments and here's what I see. https://paste.fedoraproject.org/402058/70426455/raw/ There are some lines for entry self-heal despite it being turned off. Not sure if the memory pool being full is bad or not. And not sure also why it's giving the 'failed to get the gfid from dict'
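The entry self-heal lines may come from client-side healing, which is controlled separately from the self-heal daemon; all four options below would need to be off to silence heals entirely. A sketch, with a hypothetical volume name:

    gluster volume set myvol cluster.self-heal-daemon off
    gluster volume set myvol cluster.entry-self-heal off
    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off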
20:07 somlin22 joined #gluster
20:54 armyriad joined #gluster
20:58 rbartl joined #gluster
21:00 guhcampos joined #gluster
21:07 rbartl_ joined #gluster
21:08 shyam joined #gluster
21:10 armyriad joined #gluster
21:12 bowhunter joined #gluster
21:16 bagualistico joined #gluster
21:26 chirino joined #gluster
21:46 aj__ joined #gluster
21:56 bkolden joined #gluster
22:02 rastar_ joined #gluster
22:08 xavih_ joined #gluster
22:11 kovshenin joined #gluster
22:11 wiza joined #gluster
22:12 om joined #gluster
22:20 bagualistico joined #gluster
22:27 tdasilva joined #gluster
22:45 chirino_m joined #gluster
22:59 bagualistico joined #gluster
23:19 d0nn1e joined #gluster
23:50 chirino joined #gluster
