
IRC log for #gluster, 2016-05-12


All times shown according to UTC.

Time Nick Message
00:04 Gnomethrower joined #gluster
00:08 MugginsM joined #gluster
00:14 jobewan joined #gluster
00:21 MugginsM joined #gluster
00:34 Biopandemic joined #gluster
00:38 haomaiwang joined #gluster
00:39 amye joined #gluster
00:39 jgjorgji joined #gluster
00:48 dlambrig_ joined #gluster
00:51 Biopandemic joined #gluster
00:53 Biopandemic joined #gluster
00:53 Biopandemic joined #gluster
00:57 jlockwood joined #gluster
01:01 haomaiwang joined #gluster
01:03 MikeLupe joined #gluster
01:07 jlockwood joined #gluster
01:27 jlockwood joined #gluster
01:30 EinstCrazy joined #gluster
01:32 mowntan joined #gluster
01:37 jlockwood joined #gluster
01:46 haomaiwang joined #gluster
01:47 jlockwood joined #gluster
01:51 Lee1092 joined #gluster
01:55 JesperA- joined #gluster
01:57 jlockwood joined #gluster
02:01 haomaiwang joined #gluster
02:07 jlockwood joined #gluster
02:17 jlockwood joined #gluster
02:27 jlockwood joined #gluster
02:30 MugginsM joined #gluster
02:36 MugginsM joined #gluster
02:41 plarsen joined #gluster
02:44 d0nn1e joined #gluster
02:47 jlockwood joined #gluster
02:57 jlockwood joined #gluster
02:57 gem joined #gluster
03:01 haomaiwang joined #gluster
03:07 jlockwood joined #gluster
03:17 jlockwood joined #gluster
03:21 atinm joined #gluster
03:24 bennyturns joined #gluster
03:34 RameshN joined #gluster
03:37 jlockwood joined #gluster
03:40 kshlm joined #gluster
03:40 julim joined #gluster
03:40 nehar joined #gluster
03:47 jlockwood joined #gluster
03:52 itisravi joined #gluster
03:56 jlockwood joined #gluster
04:01 haomaiwang joined #gluster
04:03 EinstCrazy joined #gluster
04:11 hgowtham joined #gluster
04:19 poornimag joined #gluster
04:23 shubhendu joined #gluster
04:24 ppai joined #gluster
04:26 gem joined #gluster
04:28 Saravanakmr joined #gluster
04:31 harish joined #gluster
04:33 hgowtham joined #gluster
04:35 prasanth joined #gluster
04:35 amye joined #gluster
04:38 sakshi joined #gluster
04:40 jgjorgji joined #gluster
04:56 gowtham joined #gluster
04:56 raghug joined #gluster
05:00 hgowtham joined #gluster
05:01 mchangir joined #gluster
05:01 haomaiwang joined #gluster
05:07 ndarshan joined #gluster
05:07 nbalacha joined #gluster
05:08 skoduri joined #gluster
05:12 rastar joined #gluster
05:15 rafi joined #gluster
05:16 Siavash joined #gluster
05:17 aspandey joined #gluster
05:23 Manikandan joined #gluster
05:25 kdhananjay joined #gluster
05:26 overclk joined #gluster
05:27 itisravi joined #gluster
05:28 nishanth joined #gluster
05:29 aravindavk joined #gluster
05:30 rwheeler joined #gluster
05:34 hchiramm joined #gluster
05:35 rafi1 joined #gluster
05:36 jiffin joined #gluster
05:39 Apeksha joined #gluster
05:39 Siavash joined #gluster
05:39 Siavash joined #gluster
05:48 rafi joined #gluster
05:50 karthik___ joined #gluster
06:00 jiffin1 joined #gluster
06:01 haomaiwang joined #gluster
06:03 RameshN joined #gluster
06:13 ppai joined #gluster
06:13 XpineX joined #gluster
06:15 level7 joined #gluster
06:22 atalur joined #gluster
06:25 Siavash joined #gluster
06:29 jtux joined #gluster
06:29 arcolife joined #gluster
06:29 sabansal_ joined #gluster
06:30 kotreshhr joined #gluster
06:36 spalai joined #gluster
06:38 karnan joined #gluster
06:41 k4n0 joined #gluster
06:43 [Enrico] joined #gluster
06:43 Siavash joined #gluster
06:43 Siavash joined #gluster
06:45 MugginsM joined #gluster
06:46 JPaul joined #gluster
06:50 MugginsM joined #gluster
06:51 gem joined #gluster
06:52 Intensity joined #gluster
06:52 RameshN joined #gluster
06:52 anil joined #gluster
06:56 raghug joined #gluster
06:57 jiffin1 joined #gluster
06:59 btpier joined #gluster
06:59 mchangir joined #gluster
07:00 natarej joined #gluster
07:01 haomaiwang joined #gluster
07:01 foster joined #gluster
07:04 atinm joined #gluster
07:05 karnan joined #gluster
07:05 JPaul joined #gluster
07:06 mowntan joined #gluster
07:07 kovshenin joined #gluster
07:12 itisravi joined #gluster
07:15 [Enrico] joined #gluster
07:16 spalai joined #gluster
07:19 amye joined #gluster
07:24 ctria joined #gluster
07:27 mchangir joined #gluster
07:27 JesperA joined #gluster
07:32 jtux joined #gluster
07:35 MikeLupe joined #gluster
07:38 JesperA joined #gluster
07:42 atinm joined #gluster
07:44 MugginsM joined #gluster
07:44 ctria joined #gluster
07:55 TvL2386 joined #gluster
08:01 haomaiwang joined #gluster
08:05 lmkone joined #gluster
08:05 lmkone hello all
08:13 arcolife joined #gluster
08:17 fsimonce joined #gluster
08:20 muneerse joined #gluster
08:20 MikeLupe joined #gluster
08:22 Gnomethrower joined #gluster
08:25 harish joined #gluster
08:25 lmkone after restarting one of the nodes (2 machines in a replica config) my clients were unable to access the data - now on the servers I only see port 24007 listening and no ports for the volumes. The volume status command times out, and so do stop and delete. Any ideas?
08:27 lmkone clients report "failed to get the port number for remote subvolume"
08:27 raghug joined #gluster
08:28 DV__ joined #gluster
08:28 muneerse joined #gluster
08:31 robb_nl joined #gluster
08:31 karthik___ joined #gluster
08:31 jiffin lmkone: check whether  gluster process are running or not
08:32 jiffin pgrep for gluster or ps ax | grep gluster
08:33 lmkone jiffin - it is, i can list volumes, peers, pools, but that's about it
08:34 jiffin lmkone: maybe the bricks are not up (i mean the glusterfsd processes), but glusterd is running fine
08:34 dlambrig_ joined #gluster
08:38 aravindavk joined #gluster
08:39 jgjorgji joined #gluster
08:40 jiffin joined #gluster
08:44 lmkone jiffin - you are right, glusterfsd is not running, but should that not be started by glusterd ?
08:46 hchiramm joined #gluster
08:48 jiffin lmkone: yes it should
08:49 jiffin lmkone: if ur backend brick is deleted or lost, then it won't
08:52 jiffin joined #gluster
08:54 lmkone jiffin: I am able to list bricks on both servers, config looks fine (and nothing was changed since before the restart, when it was working fine), what log file should give me some more info?
08:58 atinm joined #gluster
08:59 jiffin lmkone: /var/log/glusterfs/etc-gl (the log file starting with this)
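
A minimal sketch of the checks jiffin describes above, assuming a hypothetical volume name "myvol"; exact log file names vary by distro and configuration:

    # list gluster processes: glusterd (management) plus one glusterfsd per brick
    pgrep -a gluster

    # show which bricks are online and which ports they listen on
    gluster volume status myvol

    # ask glusterd to (re)start any brick processes that are down
    gluster volume start myvol force

    # management and per-brick logs live under /var/log/glusterfs/
    ls /var/log/glusterfs/ /var/log/glusterfs/bricks/
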
09:01 haomaiwang joined #gluster
09:03 ppai joined #gluster
09:18 MikeLupe joined #gluster
09:18 level7 joined #gluster
09:33 karnan joined #gluster
09:42 mchangir joined #gluster
09:53 kotreshhr joined #gluster
10:01 haomaiwang joined #gluster
10:03 atalur joined #gluster
10:04 ppai joined #gluster
10:04 paul98 joined #gluster
10:04 paul98 can you expand a lun on the fly? e.g. if we have one and run out of space, is it easy to increase it?
10:14 cholcombe joined #gluster
10:15 paul98_ joined #gluster
10:17 jlockwood joined #gluster
10:18 kshlm joined #gluster
10:22 ndevos paul98_: Gluster does not use LUNs, it builds on filesystems. If you extend the filesystem on the bricks, Gluster can just use the increased space
10:23 ndevos paul98_: the more common way (and better tested) is to add more bricks to a Gluster volume
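
A rough sketch of both approaches ndevos mentions, assuming XFS bricks and a hypothetical volume "myvol"; hostnames and brick paths are placeholders:

    # option 1: grow the filesystem under an existing brick (after growing the underlying LV/LUN)
    xfs_growfs /bricks/b1

    # option 2: add more bricks (for a replica volume, add them in multiples of
    # the replica count), then rebalance so existing data spreads onto them
    gluster volume add-brick myvol server3:/bricks/b1/brick server4:/bricks/b1/brick
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
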
10:27 jlockwood joined #gluster
10:31 EinstCrazy joined #gluster
10:31 bfoster joined #gluster
10:32 lmkone wow - getting 50B/s write speed - but seriously, what's wrong? where should I start looking?
10:35 atinm joined #gluster
10:35 mbukatov joined #gluster
10:36 kovsheni_ joined #gluster
10:37 jlockwood joined #gluster
10:38 karnan joined #gluster
10:39 kovshenin joined #gluster
10:39 kotreshhr joined #gluster
10:42 kovshenin joined #gluster
10:44 kshlm lmkone, Could you describe your environment and test a little more?
10:46 kovshenin joined #gluster
10:47 mowntan joined #gluster
10:48 kovshenin joined #gluster
10:48 paul98 joined #gluster
10:48 prasanth_ joined #gluster
10:51 lmkone kshlm - everything was working ok (maybe it was not lightning speed but I was fine with 100MB/s) but yesterday I had to reboot one of the servers (I have got a 2 server replica config) after which my clients could no longer access the data. After some digging and help from jiffin I decided to reset the config and try again. Volumes started ok but I am getting 50B/s write speed.
10:53 kshlm When you say reset the config, what did you do? Did you do a `gluster volume reset` and/or did you delete/recreate that volume?
10:54 kshlm Or did you just clean up /var/lib/glusterd on both nodes and start fresh?
10:54 kshlm 50B/s is not normal
10:54 lmkone i cleaned up the /var/lib/glusterd config
10:55 lmkone hold on - i have changed the mtu on the interface. Might that be the cause?
10:55 paul98 ndevos: sorry i meant to say bricks, i had LUNs on my brain. ok makes sense. just i'm sort of forward planning as atm i've just done one brick for the full 5tb, but want to redo it and split the data up etc.
10:55 kshlm how low did you set the mtu to?
10:55 lmkone ive set it to 9000
10:56 kshlm That should actually help increase perf.
10:57 lmkone changed it back to 1500 now on both of the nodes and I am back at 115MB/s
10:57 lmkone strange
10:58 kshlm What sort of I/O were you doing?
10:58 kshlm Was it small or large?
10:58 lmkone large file
10:58 lmkone around 200M
10:59 kshlm That should have ideally made good use of the large MTU.
10:59 lmkone it should, I don't know why it did not
11:00 kshlm Googling about mtu 9000 says that not all devices support these Jumbo frames.
11:00 kshlm Could be an issue there.
11:01 jlockwood joined #gluster
11:01 kshlm For jumbo frames to work, all devices on the network need to support it.
11:01 haomaiwang joined #gluster
11:04 DV_ joined #gluster
11:05 lmkone and that is the case - all devices are set up and 9000 MTU capable
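
A quick way to verify jumbo frames end to end, along the lines of the discussion above; "eth0" and the peer hostname are placeholders:

    # set the MTU (must match on every NIC and switch port in the path)
    ip link set dev eth0 mtu 9000

    # 8972 bytes of payload + 28 bytes of ICMP/IP headers = 9000; -M do forbids
    # fragmentation, so this fails if anything in the path is still at MTU 1500
    ping -M do -s 8972 other-gluster-node
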
11:06 atalur joined #gluster
11:10 Saravanakmr joined #gluster
11:11 hi11111 joined #gluster
11:15 B21956 joined #gluster
11:16 raghug joined #gluster
11:17 jlockwood joined #gluster
11:17 pur joined #gluster
11:19 DV__ joined #gluster
11:21 mchangir joined #gluster
11:22 julim joined #gluster
11:27 jlockwood joined #gluster
11:27 nehar joined #gluster
11:35 kkeithley_ joined #gluster
11:37 rwheeler joined #gluster
11:38 johnmilton joined #gluster
11:39 ctria joined #gluster
11:41 itisravi joined #gluster
11:42 post-factum had to do for i in $(pgrep glusterfs); do echo '-17' >/proc/${i}/oom_score_adj; done on mailserver because fcking exim :(
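
A sketch of one way to make that OOM adjustment persistent, assuming glusterd runs under systemd as glusterd.service (brick processes started afterwards inherit the score); the drop-in path and the -17 value simply mirror the one-liner above:

    # /etc/systemd/system/glusterd.service.d/oom.conf
    [Service]
    OOMScoreAdjust=-17

    # then: systemctl daemon-reload && systemctl restart glusterd
    # (oom_score_adj accepts values from -1000 to 1000)
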
11:46 hybrid512 joined #gluster
11:47 jgjorgji so i have a rebalance that's been going on for a day and i'm pretty sure it's stuck, i added a hot tier and rebalanced
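
A couple of hedged checks for a rebalance that looks stuck, assuming a hypothetical volume "myvol" on a 3.7.x build:

    # per-node progress: scanned/rebalanced/failed counts and elapsed time
    gluster volume rebalance myvol status

    # for tiered volumes, the tier daemon has its own status output (if your build supports it)
    gluster volume tier myvol status

    # the rebalance log (typically named <volname>-rebalance.log) often shows where it stalls
    less /var/log/glusterfs/myvol-rebalance.log
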
11:52 shubhendu joined #gluster
11:59 ctria joined #gluster
11:59 MikeLupe joined #gluster
12:01 haomaiwang joined #gluster
12:05 raghug joined #gluster
12:09 spalai joined #gluster
12:11 level7 joined #gluster
12:19 nehar joined #gluster
12:20 amye joined #gluster
12:22 mbukatov joined #gluster
12:24 unclemarc joined #gluster
12:25 DV_ joined #gluster
12:27 aravindavk joined #gluster
12:30 DV_ joined #gluster
12:35 MikeLupe joined #gluster
12:41 jgjorgji joined #gluster
12:41 Slashman joined #gluster
12:42 karthik___ joined #gluster
12:43 ctria joined #gluster
12:47 micke joined #gluster
12:48 spalai left #gluster
12:57 julim joined #gluster
12:59 kotreshhr left #gluster
13:01 shyam joined #gluster
13:01 jiffin joined #gluster
13:07 mpietersen joined #gluster
13:13 MikeLupe joined #gluster
13:13 nbalacha joined #gluster
13:24 spalai joined #gluster
13:25 mowntan joined #gluster
13:25 mowntan joined #gluster
13:29 gem joined #gluster
13:30 scuttle|afk joined #gluster
13:31 haomaiwang joined #gluster
13:35 ndarshan joined #gluster
13:39 MikeLupe joined #gluster
13:46 kpease joined #gluster
13:52 mchangir joined #gluster
14:01 haomaiwang joined #gluster
14:02 kdhananjay joined #gluster
14:03 darks1de joined #gluster
14:03 kdhananjay ping Ulrar
14:03 Ulrar kdhananjay: pong
14:03 darks1de anybody experienced really slow gluster? huge CPU usage as well
14:03 darks1de (handling small files / wordpress)
14:04 Ulrar I just started a "proper" thread on the ML about my problem, that should be better than spamming you, sorry for that :)
14:04 * kdhananjay cannot believe she did not get a message from glusterbot on naked pings
14:04 kdhananjay Ulrar: hi! so i saw your mail.
14:04 kdhananjay Ulrar: thought it would be easier to discuss on IRC :)
14:05 Ulrar kdhananjay: Yep sure
14:05 Ulrar kdhananjay: You have the volume config on my last mail, if that helps
14:05 kdhananjay Ulrar: going through it.
14:07 jgjorgji what's the best way to create a volume with replica 2 where bricks are on different servers but there are 2 bricks per server?
14:07 kdhananjay Ulrar: what does `gluster volume heal gluster info` say?
14:07 kdhananjay Ulrar: this is assuming your setup is still in that state
14:08 armyriad joined #gluster
14:08 jgjorgji meaning disk1 on server 1 and disk1 on server 2 are replicas, with data distributed between the pairs, and so on
14:09 jiffin jgjorgji: gluster v create replica 2 server1:disk1/<subdir> server2:disk1/<subdir> server1:disk2/<subdir> server2:disk2/<subdir>
14:09 armyriad joined #gluster
14:09 Ulrar kdhananjay: Right now 0, but earlier I had constantly different shards listed on each node. Let me just start the import again, it should fill back with shards in a few minutes
14:09 jiffin mention volume name after create
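
Putting jiffin's two lines together, a full (hypothetical) invocation; "myvol" and the brick paths are placeholders. Bricks are grouped into replica sets in the order given, so the first two bricks replicate each other and the volume distributes across the two pairs:

    gluster volume create myvol replica 2 \
        server1:/bricks/disk1/brick server2:/bricks/disk1/brick \
        server1:/bricks/disk2/brick server2:/bricks/disk2/brick
    gluster volume start myvol
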
14:09 Ulrar But when nothing is going on, it says 0
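
For reference, the heal-related commands being discussed, with a placeholder volume name (the volume in this conversation appears to be literally named "gluster"):

    gluster volume heal myvol info                   # entries each brick still needs to heal
    gluster volume heal myvol info split-brain       # entries in split-brain, if any
    gluster volume heal myvol statistics heal-count  # just the counts, per brick
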
14:12 kdhananjay Ulrar: wait, one more question: are your clients on the same nodes as the bricks?
14:13 jgjorgji also, thoughts about the setup?
14:13 jgjorgji it seems like it provides the best redundancy
14:14 Ulrar kdhananjay: they are yes
14:15 Ulrar I have 3 proxmox servers on which I installed the gluster bricks
14:15 Ulrar They have 3 SAS drives + 80 gig ssd cache with hard raid
14:16 kdhananjay Ulrar: ok that explains the constant listing of different shards on each node. pranithk just root-caused this issue today, and the fix is out. it should make it into 3.7.12
14:16 kdhananjay now for the corruption.
14:16 ctria joined #gluster
14:17 Ulrar ha, good to know
14:18 Biopandemic joined #gluster
14:18 kdhananjay Ulrar: could you share the logs in glustershd.log on all the 3 nodes?
14:19 kdhananjay atalur++
14:19 glusterbot kdhananjay: atalur's karma is now 1
14:20 Ulrar kdhananjay: Sure, I'll send that to the mailing list in a minute
14:22 JoeJulian kdhananjay: Do you have the bug id for that?
14:24 plarsen joined #gluster
14:24 Wojtek Is a 512b inode size still needed with the latest Gluster version? The Red Hat docs say so, but I did a few spot checks on one of my current volumes and I don't exceed 256b
14:24 Ulrar kdhananjay: Just sent
14:24 Ulrar Now waiting for all the dmarc failure mails
14:24 kdhananjay JoeJulian: https://bugzilla.redhat.com/show_bug.cgi?id=1335429
14:24 glusterbot Bug 1335429: medium, medium, ---, pkarampu, ASSIGNED , Self heal shows different information for the same volume from each node
14:24 wushudoin joined #gluster
14:32 kdhananjay Ulrar: node 50 has empty shd logs?
14:33 itisravi joined #gluster
14:34 Ulrar Ha, logrotate I think
14:34 Ulrar kdhananjay: I have logs from this morning up to 11:27, nothing after that
14:35 mchangir joined #gluster
14:35 Ulrar kdhananjay: I'm sorry I have an emergency, I'll mail those to you in about an hour
14:35 Ulrar Thanks a lot for your availability anyway!
14:36 kdhananjay Ulrar: ok np! let me see patch logs meanwhile to see if something else went wrong in 3.7.11
14:36 ctria joined #gluster
14:36 kdhananjay Ulrar: we can resume over mail if i'm offline by the time you return.
14:38 nehar joined #gluster
14:39 rafi1 joined #gluster
14:41 level7 joined #gluster
14:45 atalur joined #gluster
14:45 armyriad joined #gluster
14:47 kenansulayman joined #gluster
14:58 shyam joined #gluster
14:59 rafi joined #gluster
15:01 haomaiwang joined #gluster
15:01 atinm joined #gluster
15:02 kkeithley1 joined #gluster
15:04 drowe joined #gluster
15:06 kkeithley2 joined #gluster
15:06 drowe Hello - we have Gluster instances backing webservers, serving WordPress sites (standard LAMP, Ubuntu, Apache, PHP 5.5, MySQL (RDS)) - running on AWS infrastructure.  Interestingly, I'd assume the Gluster instances would be heavy on READs, but they are seemingly heavy on WRITEs (checking via iotop / iostat) - the sites are _fairly_ static with respect to files - is there something else I can look for with Gluster that would indicate
15:06 drowe the higher levels of WRITE traffic?
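
One hedged way to see where those writes come from, assuming a hypothetical volume "myvol": Gluster can report per-brick FOP counts and latencies, which shows whether WRITE and metadata FOPs really dominate over READs:

    gluster volume profile myvol start
    # run the workload for a while, then:
    gluster volume profile myvol info
    gluster volume profile myvol stop
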
15:14 spalai joined #gluster
15:25 jlp1 joined #gluster
15:28 level7 joined #gluster
15:28 bennyturns joined #gluster
15:29 haomaiwang joined #gluster
15:37 level7 joined #gluster
15:38 mowntan joined #gluster
15:53 skoduri joined #gluster
15:59 Manikandan joined #gluster
16:01 F2Knight joined #gluster
16:01 haomaiwang joined #gluster
16:02 dlambrig_ joined #gluster
16:04 ashiq joined #gluster
16:06 skylar joined #gluster
16:18 JoeJulian Wojtek: It's never "needed", it's just advised. There are occasions where the xattrs could exceed 256. Performance tests as to whether or not that actually makes a difference have been inconclusive.
16:18 JoeJulian Wojtek: If the xattr data does exceed 256b, it'll just use another inode.
16:24 pur joined #gluster
16:31 dlambrig_ joined #gluster
16:32 level7 joined #gluster
16:33 skylar joined #gluster
16:43 level7 joined #gluster
17:01 haomaiwang joined #gluster
17:02 armyriad joined #gluster
17:06 jgjorgji joined #gluster
17:06 hchiramm joined #gluster
17:10 shubhendu joined #gluster
17:10 skylar joined #gluster
17:21 dblack joined #gluster
17:27 rafi joined #gluster
17:32 bluenemo joined #gluster
17:32 dlambrig_ joined #gluster
17:34 level7_ joined #gluster
17:37 robb_nl joined #gluster
17:37 F2Knight joined #gluster
17:42 the-me joined #gluster
17:44 ashiq joined #gluster
17:49 spalai joined #gluster
17:56 rafi1 joined #gluster
18:01 haomaiwang joined #gluster
18:07 dgandhi joined #gluster
18:12 F2Knight joined #gluster
18:17 julim joined #gluster
18:20 skylar joined #gluster
18:21 skylar joined #gluster
18:25 skylar joined #gluster
19:00 level7 joined #gluster
19:01 haomaiwang joined #gluster
19:06 m0zes joined #gluster
19:09 ctria joined #gluster
19:13 robb_nl joined #gluster
19:13 hagarth joined #gluster
19:33 m0zes joined #gluster
19:34 MugginsM joined #gluster
19:44 gbox kkeithley suggested the glusterfs-coreutils.  Has anyone used them?
19:47 gbox They are more reliable than standard filesystem programs on fusemounts.  Performance is adequate but the code is at the proof of concept level.  For example, gfrm --help shows a recursive flag that is not implemented in the code.
19:49 rwheeler joined #gluster
19:50 gbox I've seen little interest here so perhaps these are POC or used internally at facebook.
19:59 JoeJulian gbox: facebook was using libnfs-based tools and developed that gfapi-based equivalent. I don't know of anyone else that's using them yet.
20:01 haomaiwang joined #gluster
20:09 hagarth joined #gluster
20:11 gbox JoeJulian: Thanks yeah I saw a great presentation on their setup at the SCALE Conf.  Antfarm, etc.  So far I've found them useful to override stalled fusemounts but they have very limited features.  Gluster's great for using standard tools.
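
For anyone curious about the glusterfs-coreutils mentioned above, invocation is roughly as follows; the glfs:// URL form, the host "server1" and the volume "myvol" are illustrative, so check each tool's --help before relying on it:

    gfls  glfs://server1/myvol/some/dir
    gfcat glfs://server1/myvol/some/file
    gfcp  localfile glfs://server1/myvol/some/file
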
20:23 gbox Regarding gfapi, if I have a multihomed host (on two networks with different IPs) then gluster only functions from/on one hostname/IP/side?
20:24 JoeJulian Works the same as the fuse mount, ie ,,(mount server)
20:24 glusterbot (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
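
A sketch of how clients usually cope with that caveat: the fuse mount can be given fallback volfile servers so it can still fetch the volume definition if the first server is unreachable (the exact option name varies slightly across versions; hostnames and volume name are placeholders):

    mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/myvol /mnt/gluster
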
20:40 BitByteNybble110 joined #gluster
20:45 djmentos joined #gluster
21:01 haomaiwang joined #gluster
21:19 dlambrig_ joined #gluster
21:23 djmentos joined #gluster
21:34 marbu joined #gluster
21:35 csaba joined #gluster
21:36 lkoranda joined #gluster
21:38 robb_nl joined #gluster
21:48 Wojtek JoeJulian: I found an interesting statement on a redhat page: For the default inode size of 256 bytes, roughly 100 bytes of attribute space is available depending on the number of data extent pointers also stored in the inode. The default inode size is really only useful for storing a small number of small attributes. https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/ch07s03s02s02s04.html
21:49 Wojtek I did a getfattr -d -m "" /mnt/data/{file} and I get 177 bytes in that. So that's above the 100 redhat recommends. I'll need to test with the 512 value
21:50 MugginsM joined #gluster
21:50 JoeJulian I always set the inode size to 512. I don't see any value in making it smaller, unless maybe if you're using cheap thumb drives to build a cluster.
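
For completeness, the inode size is chosen at mkfs time and cannot be changed on an existing XFS filesystem; a short sketch with a placeholder device and brick path:

    # check what an existing brick filesystem was formatted with
    xfs_info /bricks/b1 | grep isize

    # format a new brick with 512-byte inodes (destroys existing data on the device)
    mkfs.xfs -f -i size=512 /dev/sdb1
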
21:51 level7 joined #gluster
22:01 haomaiwang joined #gluster
22:15 johnmilton joined #gluster
22:25 Wojtek Default xfs is 256 so that's what we've been using so far. I'm curious to see what perf improvements we'll get with 512.
22:35 MugginsM joined #gluster
22:36 johnmilton joined #gluster
22:37 shyam joined #gluster
23:01 haomaiwang joined #gluster
23:13 amye joined #gluster
23:15 plarsen joined #gluster
23:19 haomaiwang joined #gluster
23:31 haomaiwang joined #gluster
23:37 hackman joined #gluster
