
IRC log for #gluster, 2017-04-27


All times shown according to UTC.

Time Nick Message
00:17 Guest25353 joined #gluster
00:55 amarts joined #gluster
00:59 loadtheacc joined #gluster
01:06 shdeng joined #gluster
01:19 daMaestro joined #gluster
01:22 oajs_ joined #gluster
01:25 vbellur joined #gluster
01:42 derjohn_mob joined #gluster
01:49 ilbot3 joined #gluster
01:49 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:08 KoSoVaR snehring would you mind sharing the specs of the servers you mentioned were originally configured JBOD dispersed? and what the throughput increase was after converting to RAID dispersed? JoeJulian: when you say you met your personal perf metric, can you share the network config? were you using 10GbE connectivity between nodes?
02:22 kramdoss_ joined #gluster
02:44 ankitr joined #gluster
02:44 Guest41400 joined #gluster
03:05 Gambit15 joined #gluster
03:17 prasanth joined #gluster
03:19 nbalacha joined #gluster
03:26 Prasad joined #gluster
03:30 magrawal joined #gluster
03:42 riyas joined #gluster
03:47 itisravi joined #gluster
03:49 atinm joined #gluster
03:55 skoduri joined #gluster
03:56 Gambit15 joined #gluster
04:03 dominicpg joined #gluster
04:06 gyadav__ joined #gluster
04:21 kramdoss_ joined #gluster
04:29 ppai joined #gluster
04:31 buvanesh_kumar joined #gluster
04:34 k0nsl joined #gluster
04:34 k0nsl joined #gluster
04:34 ankitr joined #gluster
04:39 ankitr joined #gluster
04:44 ankitr joined #gluster
04:48 apandey joined #gluster
04:50 sanoj joined #gluster
05:05 vbellur joined #gluster
05:07 melliott joined #gluster
05:08 Shu6h3ndu joined #gluster
05:10 Philambdo joined #gluster
05:11 skumar joined #gluster
05:14 ndarshan joined #gluster
05:15 rafi joined #gluster
05:23 sona joined #gluster
05:45 amarts joined #gluster
05:47 tjelinek joined #gluster
05:48 hgowtham joined #gluster
05:49 aravindavk joined #gluster
05:49 prasanth joined #gluster
05:56 ankitr joined #gluster
05:57 ankitr joined #gluster
06:00 rastar joined #gluster
06:09 Saravanakmr joined #gluster
06:09 kramdoss_ joined #gluster
06:16 kdhananjay joined #gluster
06:17 gyadav_ joined #gluster
06:20 jiffin joined #gluster
06:24 derjohn_mob joined #gluster
06:29 sona joined #gluster
06:31 skoduri joined #gluster
06:35 gyadav__ joined #gluster
06:37 poornima_ joined #gluster
06:44 Karan joined #gluster
07:03 msvbhat joined #gluster
07:06 ayaz joined #gluster
07:07 Saravanakmr joined #gluster
07:08 sbulage joined #gluster
07:12 rafi joined #gluster
07:13 karthik_us joined #gluster
07:15 kramdoss_ joined #gluster
07:22 mbukatov joined #gluster
07:23 fsimonce joined #gluster
07:30 MrAbaddon joined #gluster
07:38 shdeng joined #gluster
07:39 rwheeler joined #gluster
07:49 john1 joined #gluster
07:53 melliott joined #gluster
08:03 nishanth joined #gluster
08:13 derjohn_mob joined #gluster
08:25 glisigno1i joined #gluster
08:28 kdhananjay joined #gluster
08:32 glisigno1i Hi, I was wondering what would cause gluster to perform 'excessive' dns queries. I've got a 30 node cluster with 10 bricks on each and sometimes I'm getting a burst of over 70,000 queries
08:36 derjohn_mob joined #gluster
08:46 jiffin1 joined #gluster
08:49 atinm joined #gluster
08:51 itisravi joined #gluster
09:00 FuzzyVeg joined #gluster
09:05 glisigno1i left #gluster
09:08 k0nsl joined #gluster
09:08 k0nsl joined #gluster
09:11 kdhananjay joined #gluster
09:18 hgowtham joined #gluster
09:20 apandey joined #gluster
09:31 amarts joined #gluster
09:35 Wizek_ joined #gluster
09:45 kramdoss_ joined #gluster
09:49 apandey_ joined #gluster
09:54 k0nsl joined #gluster
09:54 k0nsl joined #gluster
09:56 Philambdo joined #gluster
09:56 MrAbaddon joined #gluster
10:03 jiffin joined #gluster
10:10 MrAbaddon joined #gluster
10:12 hgowtham joined #gluster
10:16 apandey joined #gluster
10:21 amarts joined #gluster
10:23 flying joined #gluster
10:25 msvbhat joined #gluster
10:25 FuzzyVeg joined #gluster
10:26 legreffier joined #gluster
11:03 amarts joined #gluster
11:14 sgoodliff78 joined #gluster
11:16 sgoodliff78 Hi, does anyone know how I could change a default gluster volume setting?
11:17 Philambdo left #gluster
11:17 sgoodliff78 I need to set performance.write-behind off but don't want to do it for each volume as they are getting created dynamically
11:22 kramdoss_ joined #gluster
11:24 jiffin joined #gluster
11:29 jiffin joined #gluster
11:30 bartden joined #gluster
11:34 jiffin joined #gluster
11:35 bartden Hi, whenever I try to create a fifo with mkfifo on a gluster volume via the client I get permission denied?
11:36 jiffin joined #gluster
11:53 jiffin joined #gluster
12:08 Intensity joined #gluster
12:09 janlam7 joined #gluster
12:14 msvbhat joined #gluster
12:27 jiffin joined #gluster
12:28 ira joined #gluster
12:31 MrAbaddon joined #gluster
12:32 nbalacha joined #gluster
12:49 baber joined #gluster
12:50 flying joined #gluster
12:51 shyam joined #gluster
13:08 squizzi joined #gluster
13:09 buvanesh_kumar joined #gluster
13:27 plarsen joined #gluster
13:28 sbulage left #gluster
13:29 skylar joined #gluster
13:32 amarts joined #gluster
13:33 rastar joined #gluster
13:34 buvanesh_kumar_ joined #gluster
13:55 pdrakeweb joined #gluster
14:01 snehring KoSoVaR: 2x Xeon E5-2620v4, ~256G RAM, 36-bay Supermicro 4U chassis, 10TB HGST SAS3 drives. 10G networking to clients and a dedicated 10G internode network, both networks 2x bonded 10G interfaces. Actual throughput increase was maybe 50-100MB/s; the real boost was directory ops. Went from timing out on certain directories when under load to returning within 8 seconds max.
14:02 pdrakewe_ joined #gluster
14:10 rastar joined #gluster
14:19 atinm joined #gluster
14:19 jiffin1 joined #gluster
14:23 jiffin joined #gluster
14:27 jiffin joined #gluster
14:32 pdrakeweb joined #gluster
14:38 jiffin joined #gluster
14:42 Jmainguy joined #gluster
14:42 Jmainguy ever seen the port listed in gluster volume status not match what it is listening on in reality?
14:43 Jmainguy https://paste.fedoraproject.org/paste/4ARIf6N41iVhIFHSkZrOF15M1UNdIGYhyRLivL9gydE=
14:43 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
14:44 farhorizon joined #gluster
14:45 jiffin joined #gluster
14:49 kpease joined #gluster
15:05 Jmainguy specifying an IPv4 address for my other peer in /etc/hosts fixed it
15:05 Jmainguy it was trying to go over IPv6 before
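
For anyone hitting the same symptom, the fix amounts to pinning the peer's hostname to its IPv4 address in /etc/hosts so gluster stops preferring IPv6. A minimal sketch (the address and hostname below are placeholders, not from this log):

    # /etc/hosts on each peer
    10.0.0.2    gluster-peer2    # force the peer's name to resolve over IPv4
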
15:08 wushudoin joined #gluster
15:09 wushudoin joined #gluster
15:09 kraynor5b1 joined #gluster
15:14 askz @paste
15:14 glusterbot askz: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
15:14 askz http://termbin.com/53ka
15:15 askz Hi, I have two gluster nodes and one brick replicated across each; the problem is on my second node, there's a large difference between what's in the brick and what is actually mounted (on the same host)
15:16 askz web2 is nice, web3 is actually failing... http://termbin.com/53ka
15:16 askz any ideas ?
15:22 vbellur joined #gluster
15:24 farhorizon joined #gluster
15:24 farhorizon joined #gluster
15:26 openevents joined #gluster
15:26 openevents Hello,
16:27 openevents I have a question about gluster SETATTR. I'm not sure what it does exactly (I want to find out why I get Operation not permitted on 3 files)
15:28 atinm joined #gluster
16:30 openevents can a file with chmod 660 cause a SETATTR operation not permitted error?
15:38 derjohn_mob joined #gluster
15:39 Asako joined #gluster
15:39 Asako hello.  Is there anything special I have to do to upgrade from gluster 3.8 to 3.10?
15:40 Asako or should I just leave it alone?
15:41 jeffspeff joined #gluster
15:44 Saravanakmr joined #gluster
15:48 pdrakeweb joined #gluster
15:49 msvbhat joined #gluster
15:53 flying joined #gluster
15:57 KoSoVaR snehring can you confirm you were software raiding (zfs) vs. raid controller raid6? just from your comments above, I wanted to make sure. Also, with that configuration, what is your throughput?
16:05 snehring KoSoVaR: yes, zfs. 100-200MB/s 128k writes (depending on load), around 400-500MB/s 128k reads (depending on load again). Currently copying data from another appliance and georepping, so it's not exactly idle.
16:07 askz Hi, do you have any advice for a manual resync?
16:07 askz for replica nodes
16:09 Saravanakmr joined #gluster
16:12 KoSoVaR snehring how many nodes if you don't mind?  sorry, but i'm trying to build this exact thing right now :p
16:12 snehring KoSoVaR: no problem, 6 nodes total
16:13 KoSoVaR thanks man
16:14 KoSoVaR have you been able to tell if the bottlenecks (if any) are the disks, cpu, ram? like are you IO bound in any way with that config .. i.e. cpu load from erasure coding?
16:14 snehring so cpu load from EC seemed to be a major issue with our original jbod config
16:15 snehring with georep running and copying data onto the volume loads were in the 100s
16:15 snehring now we're around 12-22
16:16 snehring also memory usage could get a bit nuts
16:17 snehring 100-130G usage was pretty normal with georep; occasionally it would go out of control and eat all the RAM, resulting in us having to use IPMI to reset the node
16:17 snehring now we're at 30G
16:17 snehring 7.3 of that is presently zfs arc
16:17 JoeJulian Asako: Just do what it says in the upgrade guide: https://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.10/
16:17 glusterbot Title: Upgrade to 3.10 - Gluster Docs (at gluster.readthedocs.io)
16:18 Asako JoeJulian: thanks
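
For reference, the linked guide describes upgrading one server at a time; a rough outline of the per-server steps, hedged because package commands vary by distro and the guide itself is authoritative ('myvol' is a placeholder volume name):

    gluster volume heal myvol info        # confirm no heals are pending first
    systemctl stop glusterd
    killall glusterfs glusterfsd          # stop any remaining gluster processes
    yum update glusterfs-server           # or the apt/dnf equivalent for 3.10
    systemctl start glusterd
    gluster volume heal myvol info        # let heals finish before the next server
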
16:18 snehring if I'd had my druthers we would have gotten slightly beefier cpus
16:18 Asako :q
16:19 snehring but reducing brick count for the volume seemed to solve or at least alleviate many of the issues we had
16:20 JoeJulian openevents: no, there's no filesystem permissions that should prevent glusterfsd (which runs as root) from setting attributes. The possibilities include the file or directory not existing, the filesystem not supporting xattrs, and selinux.
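
A few quick checks for the causes listed above, sketched with placeholder paths (the brick path and filename are hypothetical):

    # on the brick server: does the file exist, and does the brick FS expose xattrs?
    getfattr -d -m . -e hex /data/brick1/path/to/file
    # is selinux enforcing?
    getenforce
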
16:21 JoeJulian sgoodliff78: Look at the hooks directory under /var/lib/glusterd
16:22 JoeJulian askz: Are you sure replicate isn't just creating sparse files? Check with du --apparent
16:24 sgoodliff78 JoeJulian thanks, ill have a look
16:28 sgoodliff78 JoeJulian: looks like i can use http://blog.gluster.org/2013/11/effective-glusterfs-monitoring-using-hooks-2/ as a basis
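
The hook approach boils down to dropping an executable script into the create/post hook directory so every newly created volume gets the option applied. A minimal sketch, assuming the documented --volname=<name> argument convention for hook scripts (test before relying on it):

    #!/bin/bash
    # /var/lib/glusterd/hooks/1/create/post/S99-default-opts.sh (must be executable)
    for arg in "$@"; do
        case "$arg" in
            --volname=*) VOL="${arg#--volname=}" ;;
        esac
    done
    [ -n "$VOL" ] && gluster volume set "$VOL" performance.write-behind off
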
16:40 vbellur joined #gluster
16:46 gyadav__ joined #gluster
16:54 Saravanakmr joined #gluster
16:56 openevents JoeJulian
16:58 openevents JoeJulian: Thanks for your answer. It happens only on 3 files from Wordfence (a WordPress plugin). WordPress modifies those 3 files sometimes, and at the same time I get SETATTR errors on the replicated nodes
16:58 openevents but the mounted node is OK
17:00 riyas joined #gluster
17:03 amarts joined #gluster
17:08 tjelinek1 joined #gluster
17:08 pdrakeweb joined #gluster
17:10 skylar joined #gluster
17:13 Gambit15 joined #gluster
17:20 gyadav__ joined #gluster
17:25 rafi1 joined #gluster
17:25 cholcombe joined #gluster
17:34 farhorizon joined #gluster
17:39 pdrakeweb joined #gluster
17:53 vbellur joined #gluster
17:54 pdrakeweb joined #gluster
18:02 pdrakeweb joined #gluster
18:05 Shu6h3ndu joined #gluster
18:09 kramdoss_ joined #gluster
18:10 eldritch_ joined #gluster
18:25 Wizek_ joined #gluster
18:34 john51 joined #gluster
18:37 wellr00t5d joined #gluster
18:38 nobody482 joined #gluster
18:41 fsimonce joined #gluster
18:41 eryc joined #gluster
18:42 Vapez joined #gluster
18:42 Vapez joined #gluster
18:42 cliluw joined #gluster
18:59 skylar joined #gluster
19:01 farhoriz_ joined #gluster
19:18 vbellur joined #gluster
19:34 jiffin joined #gluster
19:39 guhcampos joined #gluster
19:52 vbellur joined #gluster
20:19 baber joined #gluster
20:19 david joined #gluster
20:19 david la
20:19 david hello
20:22 Guest87712 nickname david
20:23 major JoeJulian, there is the immutable permission on ext2/3/4 which will stop root ..
20:24 JoeJulian Thanks
20:24 major "A file with the 'i' attribute cannot be modified: it cannot be deleted or renamed, no link can  be  created  to  this  file and no data can be written to the file.  Only the superuser or a process possessing the CAP_LINUX_IMMUTABLE capability can set or clear this attribute.
20:25 JoeJulian But that's not producing those logs. That's because one client's creating a file and before it can change it, another client's replacing it - I bet.
20:25 JoeJulian Or locks
20:25 major yah .. I was mostly thinking of blocking root
20:25 major have to go out of your way to enable the immutable bit
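
For reference, setting and inspecting the immutable bit looks like this (the path is a placeholder; note this is done on the brick filesystem, not through the mount):

    chattr +i /data/brick1/somefile    # now even root cannot modify or delete it
    lsattr  /data/brick1/somefile      # shows 'i' among the attributes
    chattr -i /data/brick1/somefile    # clear it again
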
20:33 vbellur joined #gluster
20:34 vbellur joined #gluster
20:37 jockek joined #gluster
20:37 vbellur joined #gluster
20:37 jiffin joined #gluster
20:38 vbellur joined #gluster
20:43 kraynor5b joined #gluster
20:52 ksj is it possible to combine a volume and a brick into a replicated volume?
20:57 askz JoeJulian: sparse file means a large apparent size, right? So yes, I think the replica is creating sparse files
20:58 JoeJulian ksj: If you have a distribute volume and you want to make a distributed replica volume, you can add the same number of bricks as you already have while specifying "replica 2" during the add.
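
Concretely, that conversion looks something like the following (server and brick paths are placeholders; the new bricks pair up, in order, with the existing ones):

    # existing: a 2-brick distribute-only volume
    gluster volume add-brick myvol replica 2 server3:/data/brick1 server4:/data/brick1
    gluster volume heal myvol full     # populate the new replicas
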
20:58 JoeJulian @google sparse files
20:58 glusterbot JoeJulian: Sparse file - Wikipedia: <https://en.wikipedia.org/wiki/Sparse_file>; Sparse Files (Windows) - MSDN - Microsoft: <https://msdn.microsoft.com/en-us/library/windows/desktop/aa365564(v=vs.85).aspx>; Sparse files – what, why, and how | UNIX Administratosphere: <https://administratosphere.wordpress.com/2008/05/23/sparse-files-what-why-and-how/>; Sparse file - ArchWiki:
20:58 glusterbot JoeJulian: <https://wiki.archlinux.org/index.php/sparse_file>; NTFS Sparse Files (NTFS5 only) - NTFS.com: <http://www.ntfs.com/ntfs-sparse.htm>; Sparse files - IBM: <https://www.ibm.com/support/knowledgecenter/en/SSGSG7_6.4.0/com.ibm.itsm.client.doc/c_bac_sparsefile.html>; Sparse files - IBM: <https://www.ibm.com/support/knowledgecenter/en/ssw_aix_72/com.ibm.aix.osdevice/sparsefiles.htm>; (1 more message)
20:58 JoeJulian Wow, that was way more than I was expecting.
20:58 JoeJulian The first link is a good one though.
20:59 JoeJulian @lucky sparse file
20:59 glusterbot JoeJulian: https://en.wikipedia.org/wiki/Sparse_file
20:59 JoeJulian Oh, cool.. that was broken last time I tried it.
20:59 askz thanks, so if I understand right, this isn't a problem?
20:59 JoeJulian It may not be.
20:59 JoeJulian Having sparse files is not a problem.
20:59 askz ahah. how do i check? diff? rsync?
21:00 JoeJulian du can show you the apparent use or the actual use.
21:00 JoeJulian You, of course, are interested in seeing if the apparent use matches on both bricks.
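
A quick way to compare, run against the same brick path on each server:

    du -s --apparent-size /srv/gluster/home/www   # size the files claim to be
    du -s /srv/gluster/home/www                   # blocks actually allocated
    # on a healthy replica the apparent totals should match across bricks
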
21:01 askz the last number is the total, right? when du finishes?
21:03 JoeJulian right
21:03 askz this isn't matching
21:04 askz 29420533 for web2 226915 for web3
21:04 askz (the last is the failing one)
21:04 JoeJulian And what was the command you used?
21:04 askz du --apparent /srv/gluster/home/www
21:04 askz on the two nodes
21:05 JoeJulian Well that's a bit of a discrepancy...
21:05 JoeJulian gluster volume heal info $vol doesn't show heals pending?
21:06 askz hmm yup, lots of things
21:07 JoeJulian So, two possibilities. 1. it's healing and it's not done yet. 2. it's not healing because of reasons.
21:07 JoeJulian If 2, the most common is firewalls
21:08 JoeJulian second most is selinux
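
Quick ways to rule those out (volume name is a placeholder):

    gluster volume heal myvol info   # pending entries that never drain => stuck
    iptables-save                    # should print nothing if no firewall is intended
    getenforce                       # selinux mode
    nc -zv peer-host 24007           # mgmt port; also try the brick ports (49152+)
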
21:09 askz ahhh also, I noticed recently (this wasn't the case before, and it was already failing) that web2 is saying things like this: http://termbin.com/1fw3
21:09 Acinonyx joined #gluster
21:09 askz But I don't have firewalls on those machines nor selinux enabled
21:10 nathwill joined #gluster
21:10 askz and netstat -tlpe actually shows that gluster is listening on those ports
21:10 JoeJulian Try to see if nc can make tcp connections
21:11 askz refused
21:12 JoeJulian What distro/version?
21:13 askz debian 8.2
21:13 askz and I reach ssh from the internal network without trouble
21:14 JoeJulian I would double check iptables-save to make sure it's empty (since you said you expect it to be)
21:14 askz from localhost it's refused too...
21:14 JoeJulian ok, that makes no sense
21:14 askz I was thinking the same
21:15 JoeJulian If ss says it's listening, only iptables could interfere with that.
21:15 askz http://termbin.com/wl6d
21:16 JoeJulian I guess fdb could, too, but I don't know of any tool that would mess with that.
21:17 askz hmmmmmmmm. If I nc to the public IP, nc just waits
21:18 askz option transport.tcp.bind-address 10.5.64.39
21:18 JoeJulian You can up the verbosity on nc and see if the connection is established; even if it connects, nothing visible happens.
21:18 askz testing this
21:18 JoeJulian Did you set that?
21:18 JoeJulian That'll break things
21:18 askz ahw ok.
21:18 askz I set that before for testing purposes
21:19 askz Maybe this is why....
21:20 askz you got me ><
21:21 JoeJulian Heh
21:22 askz what did I break?
21:22 JoeJulian Things like the self-heal daemon connect over localhost.
21:22 askz is there any chance to recover it?
21:22 MrAbaddon joined #gluster
21:22 JoeJulian If you aren't listening on localhost, heals will never happen.
21:22 askz I see..
21:23 JoeJulian Sure, just remove that option.
21:23 JoeJulian Depending on where you set it, you'll need to restart one thing or another.
21:24 JoeJulian Where did you set that option?
21:24 askz glusterd.vol
21:24 JoeJulian /etc/glusterfs/glusterd.vol, I assume.
21:25 askz yup
21:25 JoeJulian In which case after changing it you need to restart glusterd.
21:25 JoeJulian That /should/ be sufficient
21:25 JoeJulian and it should not interrupt your volume use
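
For clarity, the change being described: in /etc/glusterfs/glusterd.vol the management block looks roughly like this, and the fix is deleting the bind-address line, then restarting glusterd:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        # option transport.tcp.bind-address 10.5.64.39   <- remove this line
        # (other default options omitted)
    end-volume
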
21:26 askz it's not the case
21:26 askz localhost refused on both machines
21:26 vbellur joined #gluster
21:27 askz really odd.
21:27 vbellur joined #gluster
21:29 askz the only temp fix I see is to set public ips in the /etc/hosts file?
21:30 vbellur joined #gluster
21:32 vbellur joined #gluster
21:32 askz and it's not resolving the localhost issue anyway
21:34 JoeJulian Check /var/lib/glusterd/vols/$volname/*.vol and make sure that bind-address didn't get into any of them.
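
An easy way to check every generated volfile at once, plus the stop/start JoeJulian mentions below if the option does turn up ('myvol' is a placeholder):

    grep -rn 'bind-address' /var/lib/glusterd/vols/
    gluster volume stop myvol     # only needed if removing the option by hand
    gluster volume start myvol    # doesn't stick until the volfiles are regenerated
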
21:36 askz it was the case on web3; I removed the line in two files and restarted
21:37 askz still failing
21:37 vbellur joined #gluster
21:44 baber joined #gluster
21:45 JoeJulian please show me gluster volume info
21:47 askz http://termbin.com/1ocd
21:50 askz running 3.10.1 for info
21:51 JoeJulian Strange... I wonder how that got in there then. That didn't used to carry through.
21:52 askz the two nodes are listening on public ip only. any option to change that?
21:52 askz ah, web3 is on the local ip 10.xx and web2 on the public ip
21:53 JoeJulian Everything should be listening on 0.0.0.0
21:53 askz and port 24007 on *
21:53 JoeJulian If it's not, I don't know how you did that.
21:53 askz from localhost I can connect to 24007 but not to 49152
21:53 askz that makes two of us ahah
21:54 JoeJulian If there's no bind-address in any file under /var/lib/glusterd/vols and there was before, you'll need to stop the volume and start it again.
21:54 askz ah.
21:55 askz 0-www-shared-client-1: connection attempt on 127.0.1.1:24007 failed, (Invalid argument)
21:55 askz still
21:55 askz but netstat is showing good things now
21:56 askz what is invalid argument ?
22:01 JoeJulian @lucky man 2 socket
22:01 glusterbot JoeJulian: https://linux.die.net/man/2/socket
22:01 JoeJulian Unknown protocol, or protocol family not available.
22:01 JoeJulian or
22:01 askz ah. thanks
22:01 JoeJulian Invalid flags in type
22:02 askz I see
22:02 JoeJulian The latter is pretty unlikely.
22:02 JoeJulian The former /may/ be a debian thing. Don't they have some pretty unique network defaults?
22:03 askz hmm dont know about that. you're thinking of interface default configuration? in kernel maybe?
22:03 JoeJulian yeah
22:03 JoeJulian I've never run debian
22:04 askz dont really know anything about those.
22:04 askz never heard of
22:04 askz I do sysctl tuning in fact, but on those servers I don't
22:04 JoeJulian I've not heard of anybody else reporting this trouble though. Seems like someone should have if it's that difficult.
22:05 askz right, but this was working before.
22:05 askz I just booted the nodes, they were shut down for two weeks
22:13 d-fence joined #gluster
22:14 askz so your point is that it /should/ work and this may be a bug related to my distro?
22:16 varesa joined #gluster
22:17 DJClean joined #gluster
22:20 lalatenduM joined #gluster
22:21 PotatoGim joined #gluster
22:21 billputer joined #gluster
22:22 Chinorro joined #gluster
22:23 askz lulz. rebooted the node and it's now healing
22:24 askz really odd.
22:26 askz I can eat my pizza now :D. Thanks a lot for your time JoeJulian :)
22:29 farhorizon joined #gluster
22:30 JoeJulian You're welcome
22:34 kraynor5b1 joined #gluster
22:42 derjohn_mob joined #gluster
23:11 vbellur joined #gluster
23:30 mb joined #gluster
