
IRC log for #gluster, 2018-02-02


All times shown according to UTC.

Time Nick Message
00:26 jkroon joined #gluster
00:40 shyam joined #gluster
01:05 Rakkin joined #gluster
01:17 primehaxor joined #gluster
01:18 primehaxor joined #gluster
01:20 primehaxor joined #gluster
01:22 primehaxor joined #gluster
01:22 primehaxor joined #gluster
01:24 primehaxor joined #gluster
01:25 primehaxor joined #gluster
01:25 primehaxor joined #gluster
01:26 primehaxor joined #gluster
01:28 primehaxor joined #gluster
01:28 shyam joined #gluster
01:29 primehaxor joined #gluster
01:30 primehaxor joined #gluster
01:32 primehaxor joined #gluster
01:32 primehaxor joined #gluster
01:33 primehaxor joined #gluster
01:50 kramdoss_ joined #gluster
02:18 ppai joined #gluster
02:32 kotreshhr joined #gluster
02:56 sheenobu joined #gluster
02:58 ilbot3 joined #gluster
02:58 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:18 atinm_ joined #gluster
03:18 atinm joined #gluster
03:29 bitchecker joined #gluster
03:37 aravindavk joined #gluster
03:42 Vishnu__ joined #gluster
03:43 bitchecker joined #gluster
03:43 Shu6h3ndu joined #gluster
03:44 Vishnu_ joined #gluster
03:45 atinmu joined #gluster
03:49 atinm joined #gluster
03:58 karthik_us joined #gluster
04:09 jiffin joined #gluster
04:12 atinm joined #gluster
04:12 atinm_ joined #gluster
04:18 sunny joined #gluster
04:29 apandey joined #gluster
04:34 skumar joined #gluster
04:43 nbalacha joined #gluster
04:50 Prasad joined #gluster
04:53 rastar joined #gluster
05:36 sanoj_ joined #gluster
05:36 sanoj joined #gluster
05:50 hgowtham joined #gluster
05:55 Saravanakmr joined #gluster
05:59 hgowtham joined #gluster
06:21 owlbot joined #gluster
06:21 xavih joined #gluster
06:24 hgowtham joined #gluster
06:27 nbalacha joined #gluster
06:29 hgowtham joined #gluster
07:24 jtux joined #gluster
07:36 nbalacha joined #gluster
07:52 atinmu joined #gluster
07:54 atinm joined #gluster
08:05 rastar joined #gluster
08:10 aravindavk joined #gluster
08:20 devremiz joined #gluster
08:22 jri joined #gluster
08:22 devremiz hi, i have a problem creating a volume with gluster v3.8
08:23 devremiz volume create: testvol: failed: Staging failed on storage2. Error: Host storage2 is not in ' Peer in Cluster' state
08:23 devremiz i have 3 storage nodes
08:24 nbalacha|afk joined #gluster
08:24 devremiz and my /etc/hosts is
08:24 devremiz 1.2.3.4.5   storage1 1.2.3.4.6   storage2 1.2.3.4.7   storage3
08:25 devremiz peer status looks normal
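The "not in 'Peer in Cluster' state" error means glusterd on the node issuing the volume create does not see storage2 as a fully joined, connected peer; it can also show up when the hostname used in the create command differs from the name the peer was originally probed with. A minimal sketch of the usual checks, assuming the storage1/storage2/storage3 hostnames from the /etc/hosts above:

    # on the node where the create was issued; every peer should report
    # "State: Peer in Cluster (Connected)"
    gluster peer status
    gluster pool list
    # if storage2 is listed in any other state, re-probe it using the same
    # hostname that appears in the volume create command, then retry
    gluster peer probe storage2
    # on storage2, restarting the management daemon sometimes clears a stuck state
    systemctl restart glusterd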
08:26 ivan_rossi joined #gluster
08:29 ivan_rossi left #gluster
08:39 devremiz_ joined #gluster
08:40 risjain joined #gluster
08:42 jri joined #gluster
08:53 jkroon joined #gluster
09:00 jri joined #gluster
09:11 devremiz joined #gluster
09:22 Rakkin joined #gluster
09:42 kotreshhr left #gluster
09:51 Prasad_ joined #gluster
10:30 Prasad__ joined #gluster
10:51 uebera|| joined #gluster
11:03 Sunghost joined #gluster
11:05 Sunghost Hi all. I have upgraded my distributed glusterfs on 3 nodes from 3.10.x to the latest 3.13.2. I mounted it on a client via NFS and df -h shows enough free disk space. But when I create a directory I get a "not enough disk space left" error; touch test.txt works fine. Any idea?
11:05 kml_fr joined #gluster
11:10 nbalacha|afk joined #gluster
11:12 shyam joined #gluster
11:24 Sunghost Does anyone have an idea?
11:31 nbalacha|afk joined #gluster
11:35 shellclear joined #gluster
11:41 hchiramm_ joined #gluster
11:43 Vishnu_ joined #gluster
11:43 Klas checked all server nodes?
11:47 Sunghost Yes, with the volume detail option, and all 3 have enough space left - the smallest has over 150GB
11:47 Klas no idea then, sorry
11:50 Sunghost I wonder why, since it worked before the update to 3.13.x
11:55 nbalacha Sunghost, how large are the individual bricks
11:57 Sunghost 1 and 2 have net ~32TB and 3 ~ 16TB
11:57 Sunghost XFS Filesystem
11:59 Sunghost each node has enough free inodes
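One common cause of this symptom on a distributed (DHT) volume is that a directory has to be created on every brick, so a single brick that is actually out of space or inodes can fail the mkdir even though df on the client mount shows the aggregated free space; the cluster.min-free-disk setting, which steers new file placement away from nearly full bricks, is also worth checking. A sketch of the checks, assuming a volume name of testvol and a brick path of /path/to/brick (both placeholders):

    # per-brick disk and inode usage as gluster reports it
    gluster volume status testvol detail
    # minimum free-space threshold used when placing new files
    gluster volume get testvol cluster.min-free-disk
    # on each storage node, check the brick filesystem directly
    df -h /path/to/brick
    df -i /path/to/brick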
12:11 ThHirsch joined #gluster
13:09 buvanesh_kumar joined #gluster
13:15 gospod joined #gluster
13:16 gospod2 joined #gluster
13:17 plarsen joined #gluster
13:17 gospod3 joined #gluster
13:26 buvanesh_kumar joined #gluster
13:32 primehaxor joined #gluster
13:40 kpease joined #gluster
13:46 plarsen joined #gluster
14:00 jstrunk joined #gluster
14:01 baber joined #gluster
14:04 shyam joined #gluster
14:09 baber joined #gluster
14:26 skylar1 joined #gluster
14:32 kramdoss_ joined #gluster
14:39 ThHirsch joined #gluster
14:40 atinmu joined #gluster
14:40 atinm joined #gluster
14:45 nbalacha joined #gluster
14:47 jri joined #gluster
15:08 aravindavk joined #gluster
15:18 rwheeler joined #gluster
15:28 jiffin joined #gluster
15:45 ThHirsch joined #gluster
15:55 rouven joined #gluster
16:19 atinm joined #gluster
16:19 atinmu joined #gluster
16:20 jstrunk joined #gluster
16:31 jstrunk joined #gluster
16:53 kpease joined #gluster
17:12 rouven joined #gluster
17:25 kpease joined #gluster
18:00 cholcombe so this is a silly question but has afr healing changed in the last 2yrs?  there used to be an xattr for trusted.afr.dirty on the brick files but i can't seem to find it in the 3.12 version
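The trusted.afr.* attributes live in the trusted xattr namespace on the brick backend, so they are not visible through the client mount, and getfattr hides non-user namespaces unless a match pattern is supplied; on a clean, fully healed file the dirty attribute may also read as all zeros or be absent, so not seeing it does not by itself mean the mechanism changed. A sketch for inspecting it, assuming a brick path of /data/brick1 (a placeholder):

    # run directly on a brick server against the backend path, not the mount
    getfattr -d -m . -e hex /data/brick1/path/to/file
    # look for trusted.afr.dirty and trusted.afr.<volname>-client-N in the output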
18:04 skylar1 joined #gluster
18:19 hvisage joined #gluster
18:22 rwheeler joined #gluster
18:37 jkroon joined #gluster
18:38 phlogistonjohn joined #gluster
18:53 WebertRLZ joined #gluster
19:05 skylar1 joined #gluster
19:09 T-Bone84 joined #gluster
20:19 ThHirsch joined #gluster
20:22 buvanesh_kumar joined #gluster
20:29 baber_ joined #gluster
20:31 Vapez joined #gluster
20:47 TBone84 joined #gluster
20:47 TBone84 joined #gluster
21:08 T-Bone84 joined #gluster
21:08 phlogistonjohn joined #gluster
21:45 skylar1 joined #gluster
21:47 baber_ joined #gluster
22:09 Sunghost joined #gluster
22:10 Sunghost Hi, I still have a problem after upgrading to the latest 3.13.x. I can't create directories. The error is "disk full", but I have over 150GB free on the smallest distributed brick. I need help
22:29 naisanza joined #gluster
23:48 caitnop joined #gluster
