IRC log for #gluster, 2017-08-21

All times shown according to UTC.

Time Nick Message
00:44 map1541 joined #gluster
00:47 crag joined #gluster
01:28 masber joined #gluster
01:51 ilbot3 joined #gluster
01:51 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:06 prasanth joined #gluster
02:24 plarsen joined #gluster
02:28 h4rry joined #gluster
03:14 omie888777 joined #gluster
03:15 baojg joined #gluster
03:18 mb_ joined #gluster
03:31 kotreshhr joined #gluster
03:48 h4rry joined #gluster
03:53 skumar joined #gluster
03:55 itisravi joined #gluster
04:06 Guest9038 joined #gluster
04:09 riyas joined #gluster
04:13 dominicpg joined #gluster
04:27 gyadav joined #gluster
04:29 jiffin joined #gluster
04:30 Shu6h3ndu joined #gluster
04:57 nbalacha joined #gluster
04:58 Humble joined #gluster
05:05 kdhananjay joined #gluster
05:05 karthik_us joined #gluster
05:05 karthik_us joined #gluster
05:06 atinmu joined #gluster
05:09 prasanth joined #gluster
05:12 susant joined #gluster
05:17 ndarshan joined #gluster
05:17 apandey joined #gluster
05:17 aravindavk joined #gluster
05:27 hgowtham joined #gluster
05:32 Saravanakmr joined #gluster
05:44 Guest9038 joined #gluster
05:47 atinmu joined #gluster
05:48 prasanth joined #gluster
05:52 apandey_ joined #gluster
05:57 ppai joined #gluster
06:01 rafi1 joined #gluster
06:21 ankitr joined #gluster
06:27 jtux joined #gluster
06:28 atinmu joined #gluster
06:35 prasanth joined #gluster
06:35 Guest9038 joined #gluster
06:36 buvanesh_kumar joined #gluster
06:51 jkroon joined #gluster
06:58 sona joined #gluster
07:01 aravindavk joined #gluster
07:02 msvbhat joined #gluster
07:04 mbukatov joined #gluster
07:14 poornima_ joined #gluster
07:15 Humble joined #gluster
07:29 skoduri joined #gluster
07:37 ivan_rossi joined #gluster
07:40 ivan_rossi left #gluster
07:44 apandey__ joined #gluster
07:59 fsimonce joined #gluster
07:59 ashiq joined #gluster
08:10 aravindavk joined #gluster
08:13 _KaszpiR_ joined #gluster
08:30 itisravi joined #gluster
08:32 rastar joined #gluster
09:03 saintpablo joined #gluster
09:10 saintpablo joined #gluster
09:18 [diablo] joined #gluster
09:18 marbu joined #gluster
09:20 msvbhat joined #gluster
09:24 lkoranda joined #gluster
09:33 csaba joined #gluster
09:39 kenansulayman joined #gluster
09:45 marbu joined #gluster
09:50 lkoranda joined #gluster
09:56 kenansulayman joined #gluster
09:58 csaba joined #gluster
09:59 kotreshhr joined #gluster
10:46 skumar joined #gluster
10:47 bluenemo joined #gluster
10:47 Wizek_ joined #gluster
11:00 jkroon joined #gluster
11:14 apandey joined #gluster
11:18 kotreshhr joined #gluster
11:30 p7mo joined #gluster
11:34 gyadav_ joined #gluster
11:40 baojg joined #gluster
11:45 baojg joined #gluster
11:49 baojg joined #gluster
11:56 social joined #gluster
11:56 Saravanakmr joined #gluster
12:00 baojg_ joined #gluster
12:03 aravindavk joined #gluster
12:03 flachtassekasse joined #gluster
12:09 baojg joined #gluster
12:12 FreezeS joined #gluster
12:17 FreezeS Hi guys! I have an issue with a geo-replicated volume. The master failed and I lost all the data. I replaced the HDD, copied everything back from the slave, set trusted.glusterfs.volume-id, and started the volume, but now geo-replication fails (stuck in a loop: "The above directory failed to sync. Please fix it to proceed further.")
12:17 FreezeS as far as I've read, there are some extended attributes (xattrs) that need to be synced
12:18 FreezeS what is the right procedure for this case, i.e. when the master fails?
12:32 fsimonce joined #gluster
12:34 FreezeS it seems that after I deleted everything from the slave it's syncing
12:34 FreezeS I was hoping to avoid that, as it takes 24 hours to fully sync
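
For reference, the re-stamping step FreezeS mentions might look like the sketch below; the volume name myvol and brick path /data/brick are hypothetical placeholders, and the UUID must be read from a surviving copy of the volume.

    # read the volume's UUID from a working node (myvol is a placeholder)
    gluster volume info myvol | grep 'Volume ID'
    # stamp the rebuilt brick root with the same ID as a hex-encoded xattr
    setfattr -n trusted.glusterfs.volume-id \
        -v 0x$(gluster volume info myvol | awk '/Volume ID/ {print $3}' | tr -d '-') \
        /data/brick
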
12:35 aravindavk joined #gluster
12:38 hosom joined #gluster
12:42 kotreshhr left #gluster
12:43 gospod2 joined #gluster
12:46 shyam joined #gluster
12:51 h4rry joined #gluster
12:52 plarsen joined #gluster
12:56 baojg joined #gluster
12:58 msvbhat joined #gluster
12:58 ahino1 joined #gluster
13:04 jarrpa joined #gluster
13:05 Eilyre joined #gluster
13:06 Eilyre Hello. Is it possible now to set SELinux contexts with the FUSE client on GlusterFS 3.11? If so, can someone point me to the documentation? I am not able to find it for some reason.
13:08 baojg joined #gluster
13:24 Guest9038 joined #gluster
13:31 susant joined #gluster
13:32 Humble joined #gluster
13:33 skylar joined #gluster
13:37 baojg joined #gluster
13:45 msvbhat joined #gluster
14:14 jarrpa joined #gluster
14:16 nbalacha joined #gluster
14:21 baojg joined #gluster
14:26 ppai joined #gluster
14:37 baojg joined #gluster
14:37 MikeLupe joined #gluster
14:58 farhorizon joined #gluster
15:11 wushudoin joined #gluster
15:15 vbellur joined #gluster
15:19 buvanesh_kumar joined #gluster
15:28 nbalacha joined #gluster
15:30 MrAbaddon joined #gluster
15:45 scpbanx joined #gluster
15:46 scpbanx Hi, I cannot create a volume using more than 4444 bricks; is this a known restriction?
15:47 omie888777 joined #gluster
15:49 jiffin joined #gluster
15:53 scpbanx Anyone online?
15:57 kpease joined #gluster
15:58 kkeithley 4444 bricks?
16:01 scpbanx yes
16:01 scpbanx error says exactly that
16:06 _KaszpiR_ joined #gluster
16:21 aravindavk joined #gluster
16:30 ThHirsch joined #gluster
16:33 aravindavk joined #gluster
16:33 msvbhat joined #gluster
16:42 farhorizon joined #gluster
16:44 [diablo] joined #gluster
16:45 ppai joined #gluster
17:02 Shu6h3ndu joined #gluster
17:02 aravindavk joined #gluster
17:04 skumar joined #gluster
17:05 WebertRLZ joined #gluster
17:11 alvinstarr1 joined #gluster
17:17 h4rry joined #gluster
17:21 h4rry joined #gluster
17:22 jiffin joined #gluster
17:36 _KaszpiR_ joined #gluster
17:41 skylar joined #gluster
18:04 skumar joined #gluster
18:05 jiffin joined #gluster
18:05 sona joined #gluster
18:23 skumar joined #gluster
18:25 tannerb3 joined #gluster
18:59 scobanx joined #gluster
19:05 rwheeler joined #gluster
19:48 [diablo] joined #gluster
19:49 MadPsy joined #gluster
19:49 MadPsy joined #gluster
19:50 baojg joined #gluster
19:52 farhorizon joined #gluster
19:54 bartden joined #gluster
19:58 bartden Hi, I have a general question about storage and IOPS. I have a WD disk which delivers 200 write IOPS (according to the specs). The disk has XFS with a 4K block size. If I wanted to run an application which requires 8 MB/s of throughput, I would need 8/4*1024 = 2048 IOPS. But I'm a bit confused here: when I do a dd write onto the disk I get something like 140 MB/s. Why does this not match?
20:00 vbellur bartden: have you tried using o_sync and o_direct in oflag with dd to bypass caching?
20:01 baojg joined #gluster
20:01 bartden I dropped the cache before executing dd (echo 3 > /proc/sys/vm/drop_caches)
20:02 vbellur that doesn't help, as your writes could still be cached in the page cache
20:04 bartden ok, so oflag=direct would do?
20:05 hosom Is there a good resource for default config settings for volumes?
20:05 vbellur bartden: oflag=sync would be better to bypass all caches
20:05 hosom I want to see if tuning the threading settings improves my performance, but gluster volume info doesn't seem to print the values that are set by default
20:05 hosom Makes it hard to know if I'm moving up or down
20:06 vbellur hosom: gluster volume get <volname> all
20:06 hosom Thanks!
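
As an illustration of the command above, inspecting and then tuning the thread settings might look like this; the volume name myvol and the chosen value are hypothetical, and performance.io-thread-count is one of the knobs that gluster volume get lists.

    # print every option with its effective value, defaults included
    gluster volume get myvol all
    # narrow the output down to the threading options
    gluster volume get myvol all | grep -i thread
    # then raise one of them, e.g. the brick-side io-threads pool
    gluster volume set myvol performance.io-thread-count 32
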
20:07 hosom Is there a common cause for the FUSE mount to lock up under heavy use, like an I/O test?
20:08 bartden vbellur this indeed gives an entirely different result :)
20:12 vbellur hosom: what do you mean by locking? is the mount completely unresponsive?
20:12 vbellur bartden: a result that you expected theoretically? :)
20:12 hosom The mount is responsive, however the directory for the I/O test is not
20:13 bartden no, sync gives less and direct gives ~2x more
20:13 alvinstarr1 joined #gluster
20:13 bartden but it's indeed more realistic
20:14 vbellur hosom: can you not cd to the directory from the same mount or a different mount?
20:14 bartden vbellur but as I understand it, using sync gives me more assurance of integrity, but it would cost me a fortune to get the performance I need
20:14 hosom So for mountpoint /foo and test location of /foo/bar, the latter will stop responding, while all other directories on the same mount /foo continue to work
20:15 vbellur bartden: cool, O_DIRECT bypasses the kernel cache and writes to the disk cache. If you have a battery-backed cache in your disks, you can mostly survive a power outage.
20:16 bartden ok thx for the info
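
To make the three variants discussed above concrete, a minimal sketch (the target path and sizes are hypothetical):

    # buffered write: lands in the page cache, so it overstates sustained disk speed
    dd if=/dev/zero of=/mnt/test/file bs=4k count=100000
    # direct write: bypasses the page cache but still uses the disk's own cache
    dd if=/dev/zero of=/mnt/test/file bs=4k count=100000 oflag=direct
    # synchronous write: waits for each block to be committed, bypassing all caches
    dd if=/dev/zero of=/mnt/test/file bs=4k count=100000 oflag=sync
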
20:17 vbellur hosom: you might want to check the gluster client logs and/or trigger a statedump to understand more about this
20:17 hosom No dice on the client logs at the default log level; I can try a statedump
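
Triggering the statedump might look like this; the volume name myvol is hypothetical, and the PID lookup assumes a single glusterfs client process on the box.

    # server side: dump the brick processes' state for one volume
    gluster volume statedump myvol
    # client side: SIGUSR1 makes the fuse client write a dump (by default under /var/run/gluster)
    kill -USR1 $(pgrep -x glusterfs)
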
20:22 farhorizon joined #gluster
20:30 baojg joined #gluster
20:31 bartden what is the recommended XFS block size to use on SATA disks for gluster bricks?
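
For what it's worth, the Gluster install docs of this era recommend keeping the default 4K block size and instead formatting bricks with a larger inode size, so Gluster's xattrs fit inline (the device name below is hypothetical):

    # format a brick as recommended by the Gluster docs: 512-byte inodes
    mkfs.xfs -f -i size=512 /dev/sdb1
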
20:37 baojg joined #gluster
20:42 hosom hm... may have found the problem... an extraordinarily low MTU on the client side may have caused problems down the line
20:42 baojg joined #gluster
20:46 skumar joined #gluster
20:50 baojg joined #gluster
20:54 baojg joined #gluster
21:07 [diablo] joined #gluster
21:08 baojg joined #gluster
21:15 hosom okay, so I don't really know what I'm looking for in the statedump file
21:16 hosom the MTU change did not fix the issue though
21:18 omie888777 joined #gluster
21:24 hosom attaching strace to the running glusterfs client just shows a bunch of nanosleep calls
21:25 hosom no I/O on the bricks at all... it seems like the client hits a bug and just stops sending data...
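
The strace attachment described above might look like this (assuming a single glusterfs client process; with several, pick the PID of the mount in question):

    # -f follows all threads, -tt adds microsecond timestamps
    strace -f -tt -p $(pgrep -x glusterfs)
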
21:29 baojg joined #gluster
21:35 kpease joined #gluster
21:38 baojg joined #gluster
21:43 baojg joined #gluster
21:50 guhcampos joined #gluster
21:57 baojg joined #gluster
22:10 h4rry joined #gluster
22:31 _KaszpiR_ joined #gluster
22:32 plarsen joined #gluster
22:50 baojg joined #gluster
23:13 baojg joined #gluster
23:20 baojg joined #gluster
23:27 baojg joined #gluster
23:32 baojg joined #gluster
23:37 baojg joined #gluster
23:45 baojg joined #gluster
23:50 baojg joined #gluster
23:54 shyam joined #gluster
23:55 baojg joined #gluster
23:59 baojg joined #gluster
