
IRC log for #gluster, 2017-10-09


All times shown according to UTC.

Time Nick Message
00:08 msvbhat joined #gluster
00:56 BlackoutWNCT Hey guys, I've got an issue atm with my glusterfs NFS mount, this is the output from the log file:  connection to 127.0.1.1:49153 failed (Connection refused); disconnecting socket
00:57 BlackoutWNCT I don't have any firewalls in place which would cause this machine to not be able to connect to itself.
00:57 BlackoutWNCT Regardless, I added the following rule to UFW : 127.0.0.0/8                ALLOW IN    127.0.0.0/8
00:57 BlackoutWNCT That has not resolved the issue.
00:58 BlackoutWNCT I've rebooted the host, restarted both the rpcbind and glusterfs-server services, ensured that the nfs-kernel-server isn't running.
00:58 BlackoutWNCT I'm all out of ideas at this stage.
00:59 BlackoutWNCT I've also confirmed that I have the nfs.disable set to "off"
00:59 BlackoutWNCT on the gluster volume.
00:59 BlackoutWNCT the server.allow-insecure is also set to "on"
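
A rough checklist for this kind of gNFS "connection refused" report, assuming the built-in gluster NFS server and a placeholder volume name "myvol" (not from the log); these are standard gluster/rpcbind commands:

    # is the gluster NFS server process running, and on which port?
    gluster volume status myvol nfs
    # is it registered with rpcbind (nfs, mountd, nlockmgr entries expected)?
    rpcinfo -p localhost
    # does the volume actually get exported?
    showmount -e localhost
    # the NFS server log usually names the real failure
    less /var/log/glusterfs/nfs.log
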
01:08 m0zes joined #gluster
01:09 msvbhat joined #gluster
01:10 major_ joined #gluster
01:18 nh2 can I convince `gluster volume info` to not use the global log file, so I can run it as non-root?
01:19 nh2 it seems I can run `gluster --log-file=myfile volume info`, but the first and only thing it prints into myfile is that it can't open /var/log/glusterfs/...
01:24 nh2 ah actually that does work, my file contents are old. Now it complains that it can't read the ssl private key, which makes sense
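
A minimal sketch of the invocation nh2 describes, pointing the CLI log somewhere a non-root user can write; the path is just an example:

    gluster --log-file=/tmp/gluster-cli.log volume info
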
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:58 shyu joined #gluster
02:02 gospod2 joined #gluster
02:10 msvbhat joined #gluster
02:10 msvbhat_ joined #gluster
02:32 jiffin joined #gluster
02:43 bEsTiAn joined #gluster
03:15 ramteid joined #gluster
03:30 aravindavk joined #gluster
03:42 nbalacha joined #gluster
03:43 bEsTiAn joined #gluster
03:51 psony joined #gluster
03:52 itisravi joined #gluster
03:56 jkroon joined #gluster
04:00 msvbhat joined #gluster
04:00 msvbhat_ joined #gluster
04:10 kramdoss_ joined #gluster
04:13 dominicpg joined #gluster
04:15 Prasad joined #gluster
04:24 apandey joined #gluster
04:25 rafi1 joined #gluster
04:25 atinmu joined #gluster
04:26 msvbhat joined #gluster
04:26 msvbhat_ joined #gluster
04:29 pioto joined #gluster
04:32 Shu6h3ndu joined #gluster
04:58 msvbhat joined #gluster
04:58 msvbhat_ joined #gluster
05:01 skumar joined #gluster
05:04 kdhananjay joined #gluster
05:08 Prasad_ joined #gluster
05:11 xavih joined #gluster
05:14 rouven joined #gluster
05:14 Prasad joined #gluster
05:15 xavih joined #gluster
05:26 poornima joined #gluster
05:29 karthik_us joined #gluster
05:33 sanoj joined #gluster
05:38 hgowtham joined #gluster
05:41 ppai joined #gluster
05:41 Prasad_ joined #gluster
05:42 rouven joined #gluster
05:47 aravindavk joined #gluster
05:55 kotreshhr joined #gluster
05:56 Prasad joined #gluster
05:57 xavih_ joined #gluster
05:58 shdeng joined #gluster
06:04 armyriad joined #gluster
06:10 ThHirsch joined #gluster
06:17 Saravanakmr joined #gluster
06:22 Klas catphish: not that simple unfortunately. I think that the cleanest solution in general would be to mount an nfs client (faster than the FUSE client) before the backup job and then unmount it afterward (makes stale mounts and HA issues very rare).
06:27 msvbhat joined #gluster
06:27 msvbhat_ joined #gluster
06:30 xavih joined #gluster
06:31 aravindavk joined #gluster
06:37 jtux joined #gluster
06:51 ivan_rossi joined #gluster
06:52 karthik_us joined #gluster
06:55 [diablo] joined #gluster
06:59 mbukatov joined #gluster
07:00 jiffin joined #gluster
07:00 rafi joined #gluster
07:13 kotreshhr joined #gluster
07:18 xavih joined #gluster
07:29 skoduri joined #gluster
07:31 rafi2 joined #gluster
07:39 fsimonce joined #gluster
07:40 ThHirsch joined #gluster
07:42 buvanesh_kumar joined #gluster
07:44 rouven joined #gluster
07:48 kotreshhr joined #gluster
08:05 _KaszpiR_ joined #gluster
08:11 itisravi joined #gluster
08:28 rwheeler joined #gluster
08:32 kdhananjay joined #gluster
08:37 sanoj joined #gluster
08:38 xavih joined #gluster
08:47 rouven joined #gluster
08:48 nbalacha joined #gluster
08:49 atinm|mtg joined #gluster
09:02 omie888777 joined #gluster
09:08 sanoj joined #gluster
09:13 jkroon joined #gluster
09:27 kramdoss_ joined #gluster
09:37 atinm|mtg joined #gluster
09:40 rastar joined #gluster
09:44 msvbhat joined #gluster
09:46 nbalacha joined #gluster
09:47 msvbhat_ joined #gluster
09:48 Wizek_ joined #gluster
09:53 hgowtham joined #gluster
10:07 kramdoss_ joined #gluster
10:44 jiffin1 joined #gluster
10:44 jtux joined #gluster
10:47 jiffin joined #gluster
10:48 rouven joined #gluster
11:13 ramteid joined #gluster
11:13 shyu joined #gluster
11:25 itisravi joined #gluster
11:42 burn Hi, I have a glusterfs volume with replica 4 (1 brick per server). Can I just add 4 new bricks to grow the volume?
11:43 burn But how does glusterfs know which bricks are meant for replication and which for growing the volume?
11:48 karthik_us joined #gluster
11:49 kotreshhr left #gluster
11:54 MrAbaddon joined #gluster
12:02 mattmcc_ joined #gluster
12:06 skumar joined #gluster
12:09 pdrakeweb joined #gluster
12:13 Prasad_ joined #gluster
12:19 mramirkhan joined #gluster
12:21 baber joined #gluster
12:23 mramirkhan Hello, I'm a little confused about gluster. I have some virtual machines I need to make into active-active 2-node NFS servers. My back end is all RAIDed Equallogic, so no need for redundancy etc. My question is: given that, should I just use clvm, or does gluster bring anything to the table? Sorry for the newbie question
12:24 mramirkhan I don't need redundancy but I do need DLM with 2 active nodes.
12:25 rafi joined #gluster
12:27 karthik_us joined #gluster
12:30 msvbhat joined #gluster
12:30 msvbhat_ joined #gluster
12:32 prasanth joined #gluster
12:43 Prasad joined #gluster
12:44 nbalacha joined #gluster
12:49 side_control joined #gluster
12:58 shyam joined #gluster
12:59 apandey joined #gluster
13:12 skumar joined #gluster
13:18 aravindavk joined #gluster
13:23 dxlsm good day #gluster
13:27 07IABEXKV joined #gluster
13:27 92AAB6KAZ joined #gluster
13:29 Klas burn: you just say how many copies should exist with "replica #", and the bricks need to be added in multiples of that
13:29 Klas the rest is mostly handled, though rebalancing might be needed
13:30 jiffin joined #gluster
13:30 burn Klas, ok, so I have replica 4 with 4 bricks; that means I just add 4 new bricks of the same size?
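
A sketch of what that expansion could look like for a hypothetical volume "myvol"; server names and brick paths are placeholders. Adding 4 bricks to an existing 1x4 replica volume makes it a 2x4 distributed-replicated volume, with the 4 new bricks forming the second replica set:

    gluster volume add-brick myvol server5:/bricks/b1/data server6:/bricks/b1/data server7:/bricks/b1/data server8:/bricks/b1/data
    # spread existing data across the new replica set
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
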
13:56 MrAbaddon joined #gluster
13:58 nbalacha joined #gluster
14:05 skylar1 joined #gluster
14:08 dominicpg joined #gluster
14:14 catphish joined #gluster
14:14 catphish why does gluster recommend against the use of a directory on a root partition as a brick?
14:24 rafi2 joined #gluster
14:35 Klas burn: size is not a requirement
14:36 Klas catphish: since it's always a bad idea to mix system partitions with useable volumes
14:39 hmamtora_ joined #gluster
14:39 hmamtora joined #gluster
14:43 farhorizon joined #gluster
14:45 skylar1 joined #gluster
14:46 TBlaar joined #gluster
14:48 catphish Klas: is that the only reason then?
14:49 msvbhat joined #gluster
14:49 Klas not sure
14:49 msvbhat_ joined #gluster
14:49 catphish it just seems surprisingly strict about it
14:50 Klas nah, seems sane
14:50 Klas they are also strict with not using the root of a partition for a brick
14:50 Klas they have no quota systems and so forth, so unless you limit it, it will break
14:51 catphish those 2 requirements seem contradictory in a way
14:52 catphish since the main reason I'm aware of not to use a system partition for data is the risk of filling up the disk, I'd think it wouldn't be sane to put 2 bricks on the same partition for the same reason
14:54 catphish but yeah, usually it would be up to an administrator to decide whether to share data with their root partition, so i wondered if there was a specific reason why gluster was so keen to prevent it
14:54 catphish i'm happy to follow the recommendation, but i'd be very interested to know why
14:55 kpease joined #gluster
14:56 kpease_ joined #gluster
14:58 ndevos catphish: not using the root-disk prevents problems where Gluster users fill up a volume and the storage server runs out of space and fails for normal operations
14:59 ndevos the subdir of a mountpoint is used as a check that the mountpoint+subdir exists when the brick process starts; if the subdir is not there after a reboot, the disk for the brick is probably not mounted
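
A sketch of the layout ndevos describes, with hypothetical device, path and volume names: the brick is a subdirectory of a dedicated mount point, so if the filesystem is not mounted after a reboot the subdirectory is missing and the brick will not start writing onto the root disk:

    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/brick1
    mount /dev/sdb1 /bricks/brick1
    mkdir /bricks/brick1/data        # this subdirectory is the actual brick
    gluster volume create myvol replica 2 server1:/bricks/brick1/data server2:/bricks/brick1/data
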
15:04 vbellur joined #gluster
15:04 plarsen joined #gluster
15:04 plarsen joined #gluster
15:06 snehring joined #gluster
15:09 major_ joined #gluster
15:10 baber joined #gluster
15:11 catphish ndevos: thanks
15:17 rwheeler joined #gluster
15:25 nbalacha joined #gluster
15:47 kramdoss_ joined #gluster
16:03 vbellur joined #gluster
16:07 Norky joined #gluster
16:20 baber joined #gluster
16:25 kramdoss_ joined #gluster
16:27 Gambit15 joined #gluster
16:27 rafi joined #gluster
16:28 vbellur joined #gluster
16:35 ivan_rossi left #gluster
16:35 somari left #gluster
16:41 baber joined #gluster
16:41 shyam1 joined #gluster
16:41 msvbhat joined #gluster
16:41 msvbhat_ joined #gluster
16:43 MrAbaddon joined #gluster
16:49 shyam joined #gluster
16:50 shyam joined #gluster
17:17 shyam joined #gluster
17:19 rouven joined #gluster
17:23 _KaszpiR_ joined #gluster
17:39 [diablo] joined #gluster
18:03 rouven_ joined #gluster
18:06 rouven joined #gluster
18:07 jkroon joined #gluster
18:13 jiffin joined #gluster
18:14 jkroon joined #gluster
18:21 jefarr joined #gluster
18:46 jiffin joined #gluster
18:47 nbfuel joined #gluster
18:50 nbfuel We're on glusterfs 3.7.6 on Linux 4.10.0-32-generic/Ubuntu. Gluster has been stable for 2 months, but recently we've seen CPU use spike on the 3-node gluster system, and on our Kubernetes nodes that mount the gluster shares we hit a soft kernel panic that reboots the VM.
18:53 nbfuel The servers mounting the shares run nginx, mostly serving static assets-- the call stack shows nginx in fuse_simple_request and fuse_lookup_name.
18:53 glusterbot nbfuel: assets's karma is now -1
18:56 nbfuel Somewhat stumped with what to try next.  Gluster has been solid for months.
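
Some standard data-gathering commands that could narrow this down, shown for a placeholder volume "myvol"; this is the usual gluster tooling rather than advice from the channel:

    # per-brick FOP counts and latencies, to see what the CPU spike is doing
    gluster volume profile myvol start
    gluster volume profile myvol info
    # dump fd/inode/memory tables of the brick processes (written under /var/run/gluster)
    gluster volume statedump myvol
    # busiest files by open calls
    gluster volume top myvol open
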
19:11 buvanesh_kumar joined #gluster
19:18 dlambrig joined #gluster
19:54 pcdummy joined #gluster
19:54 pcdummy joined #gluster
19:58 farhorizon joined #gluster
20:08 dlambrig joined #gluster
20:08 baber joined #gluster
20:44 vbellur joined #gluster
20:45 vbellur joined #gluster
20:46 vbellur joined #gluster
20:50 Acinonyx joined #gluster
20:51 pasqualeiv joined #gluster
20:52 pasqualeiv happy Monday #gluster.
20:56 baber joined #gluster
20:59 farhorizon joined #gluster
21:10 dlambrig joined #gluster
21:16 omie888777 joined #gluster
21:28 farhorizon joined #gluster
21:31 dlambrig joined #gluster
21:41 nbfuel Another data point!  It looks like our reboots are timed with ` fd cleanup on` lines in the brick logs.
21:51 Jacob843 joined #gluster
22:17 shyam joined #gluster
22:22 jarsp joined #gluster
22:22 jarsp hi
22:22 glusterbot jarsp: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:24 jarsp we are running a gluster filesystem on a small cluster, and we find that it keeps breaking when writing to a large number (~2-3 thousand) of small files
22:25 jarsp i realize this is not the recommended usage but is this the expected behavior?
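
A few commonly-cited volume options for small-file workloads, shown for a placeholder volume "myvol"; these options exist in the 3.x series, but whether they help (or whether the breakage here is something else entirely) depends on the setup:

    gluster volume set myvol cluster.lookup-optimize on
    gluster volume set myvol client.event-threads 4
    gluster volume set myvol server.event-threads 4
    gluster volume set myvol performance.cache-size 256MB
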
22:26 sj_ joined #gluster
22:33 john51 joined #gluster
22:38 john51 joined #gluster
23:04 Teraii joined #gluster
23:10 dlambrig joined #gluster
23:15 shyam joined #gluster
23:26 cyberbootje joined #gluster
23:31 dlambrig joined #gluster
23:35 vbellur joined #gluster
23:44 PatNarciso_ is master-master geo-replication supported?   i'd like to find a way around the delays introduced in an unadvised WAN setup.   miami-sdiego-nyc WAN... not linksys wan.
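
GlusterFS geo-replication is one-way (master to slave); there is no master-master mode. A sketch of the usual unidirectional setup, with hypothetical volume and host names:

    # on the master cluster: generate and distribute the pem keys
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
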
23:55 baber joined #gluster
