
IRC log for #gluster, 2015-12-15


All times shown according to UTC.

Time Nick Message
00:00 cjellick_ joined #gluster
00:01 haomaiwa_ joined #gluster
00:03 cjellick joined #gluster
00:05 cjellick joined #gluster
00:09 cjellick_ joined #gluster
00:18 delhage joined #gluster
00:23 dgandhi joined #gluster
00:23 EinstCrazy joined #gluster
00:25 mlncn joined #gluster
00:28 mlncn joined #gluster
00:32 mlncn joined #gluster
00:33 Telsin joined #gluster
00:33 mlncn joined #gluster
00:48 primusinterpares joined #gluster
00:53 harish_ joined #gluster
00:55 delhage joined #gluster
00:57 prg3 left #gluster
00:58 EinstCrazy joined #gluster
01:01 haomaiwa_ joined #gluster
01:04 amye joined #gluster
01:28 julim joined #gluster
01:28 B21956 joined #gluster
01:31 mlncn joined #gluster
01:32 Lee1092 joined #gluster
01:34 mlncn joined #gluster
01:36 mlncn joined #gluster
01:40 chirino joined #gluster
01:40 haomaiwa_ joined #gluster
01:46 mlncn joined #gluster
01:48 mlncn joined #gluster
01:51 B21956 joined #gluster
02:09 plarsen joined #gluster
02:16 suliba joined #gluster
02:22 overclk joined #gluster
02:24 suliba joined #gluster
02:26 mlncn joined #gluster
02:28 mlncn joined #gluster
02:29 haomaiwa_ joined #gluster
02:33 suliba joined #gluster
02:36 mlncn joined #gluster
02:38 mlncn joined #gluster
02:39 rideh joined #gluster
02:39 jmarley joined #gluster
02:40 suliba joined #gluster
02:46 mlncn joined #gluster
02:48 spalai joined #gluster
02:48 cjellick joined #gluster
02:50 chirino joined #gluster
02:50 mlncn joined #gluster
02:52 auzty joined #gluster
02:52 suliba joined #gluster
02:52 spalai left #gluster
02:55 spalai joined #gluster
02:56 mlncn joined #gluster
02:57 gildub joined #gluster
02:58 mlncn joined #gluster
03:01 haomaiwa_ joined #gluster
03:07 prg3 joined #gluster
03:13 kdhananjay joined #gluster
03:20 rideh joined #gluster
03:21 EinstCrazy joined #gluster
03:23 rideh joined #gluster
03:27 nbalacha joined #gluster
03:30 mlncn joined #gluster
03:32 mlncn joined #gluster
03:34 nishanth joined #gluster
03:34 mlncn joined #gluster
03:36 mlncn joined #gluster
03:38 atinm joined #gluster
03:44 Peppard joined #gluster
03:48 mlncn joined #gluster
03:50 suliba joined #gluster
03:50 ppai joined #gluster
03:50 mlncn joined #gluster
03:56 nangthang joined #gluster
04:02 calavera joined #gluster
04:02 amye joined #gluster
04:02 suliba joined #gluster
04:03 kanagaraj joined #gluster
04:03 itisravi joined #gluster
04:04 mlncn joined #gluster
04:07 suliba_ joined #gluster
04:08 RameshN joined #gluster
04:10 vmallika joined #gluster
04:12 cjellick joined #gluster
04:12 chirino joined #gluster
04:20 mlncn joined #gluster
04:21 hagarth joined #gluster
04:22 mlncn joined #gluster
04:23 haomaiwa_ joined #gluster
04:25 kotreshhr joined #gluster
04:29 kdhananjay joined #gluster
04:29 RameshN joined #gluster
04:29 jiffin joined #gluster
04:39 haomaiw__ joined #gluster
04:39 64MAAHQJJ joined #gluster
04:40 18WABIL3C joined #gluster
04:41 17WABA6HN joined #gluster
04:42 chirino joined #gluster
04:42 haomaiwang joined #gluster
04:43 haomaiwa_ joined #gluster
04:44 haomaiwa_ joined #gluster
04:45 64MAAHQNB joined #gluster
04:46 64MAAHQNQ joined #gluster
04:46 mlncn joined #gluster
04:47 chirino joined #gluster
04:47 17WABA6KJ joined #gluster
04:48 calavera joined #gluster
04:48 18WABIL6U joined #gluster
04:48 mlncn joined #gluster
04:48 nehar joined #gluster
04:49 17WABA6LM joined #gluster
04:50 haomaiwang joined #gluster
04:51 17WABA6MR joined #gluster
04:52 haomaiwa_ joined #gluster
04:52 mlncn joined #gluster
04:53 haomaiwang joined #gluster
04:54 mlncn joined #gluster
05:07 calavera joined #gluster
05:07 poornimag joined #gluster
05:09 kshlm joined #gluster
05:11 pppp joined #gluster
05:12 hgowtham joined #gluster
05:12 chirino joined #gluster
05:16 skoduri joined #gluster
05:18 arcolife joined #gluster
05:20 raghu joined #gluster
05:22 rjoseph joined #gluster
05:22 Manikandan joined #gluster
05:22 Apeksha joined #gluster
05:22 DV joined #gluster
05:27 vmallika joined #gluster
05:28 shubhendu joined #gluster
05:31 javi404 joined #gluster
05:31 cliluw joined #gluster
05:31 kdhananjay joined #gluster
05:32 Bhaskarakiran joined #gluster
05:33 ramteid joined #gluster
05:41 calavera joined #gluster
05:42 suliba joined #gluster
05:48 calavera joined #gluster
05:54 rwheeler joined #gluster
05:58 suliba joined #gluster
06:25 SOLDIERz joined #gluster
06:25 deepakcs joined #gluster
06:26 rafi joined #gluster
06:28 kshlm joined #gluster
06:29 kaushal_ joined #gluster
06:36 atalur joined #gluster
06:37 itisravi_ joined #gluster
06:41 kshlm joined #gluster
06:43 mobaer joined #gluster
06:47 kshlm joined #gluster
06:53 RameshN joined #gluster
06:56 Park joined #gluster
06:56 ppai joined #gluster
06:58 spalai joined #gluster
07:03 Park hi, is libgfapi thread-safe? Why doesn't libgfapi-python release the GIL on the blocking functions?
07:08 jwd joined #gluster
07:09 anil joined #gluster
07:12 hagarth joined #gluster
07:17 mhulsman joined #gluster
07:30 jtux joined #gluster
07:31 ramky joined #gluster
07:34 uebera|| joined #gluster
07:34 uebera|| joined #gluster
07:38 suliba joined #gluster
07:43 kdhananjay joined #gluster
07:46 shubhendu joined #gluster
07:50 uebera|| joined #gluster
07:50 uebera|| joined #gluster
07:50 spalai joined #gluster
07:51 suliba joined #gluster
07:56 nangthang joined #gluster
07:57 nbalacha joined #gluster
07:58 suliba joined #gluster
08:07 mobaer joined #gluster
08:10 suliba joined #gluster
08:16 suliba joined #gluster
08:19 RameshN joined #gluster
08:19 ivan_rossi joined #gluster
08:25 suliba joined #gluster
08:25 kanagaraj joined #gluster
08:26 [Enrico] joined #gluster
08:28 itisravi joined #gluster
08:34 fsimonce joined #gluster
08:34 Debloper joined #gluster
08:42 suliba joined #gluster
08:46 nishanth joined #gluster
08:48 vmallika joined #gluster
08:49 shubhendu joined #gluster
08:51 suliba joined #gluster
08:51 RameshN joined #gluster
09:01 Saravana_ joined #gluster
09:03 Saravanakmr joined #gluster
09:10 julim joined #gluster
09:11 spalai joined #gluster
09:17 evgrinsven joined #gluster
09:19 Slashman joined #gluster
09:21 overclk joined #gluster
09:32 harish_ joined #gluster
09:36 arcolife joined #gluster
09:48 evgrinsven joined #gluster
09:54 nishanth joined #gluster
09:58 vmallika joined #gluster
10:01 cliluw joined #gluster
10:06 firemanxbr joined #gluster
10:09 pppp joined #gluster
10:13 ramky joined #gluster
10:17 rjoseph joined #gluster
10:19 evgrinsven joined #gluster
10:40 XpineX joined #gluster
10:41 kkeithley1 joined #gluster
10:42 evgrinsven joined #gluster
10:48 rjoseph joined #gluster
10:49 shubhendu joined #gluster
10:56 [Enrico] joined #gluster
10:58 cliluw joined #gluster
11:00 [Enrico] joined #gluster
11:11 badone joined #gluster
11:13 jad_jay joined #gluster
11:13 jad_jay Ola
11:14 jad_jay How can you expand a replicated volume ?
11:14 jad_jay I saw solution for distributed but none for replicated
11:14 jad_jay I mean replicated only
11:15 msvbhat jad_jay: You mean you want to increase the replica count?
11:15 jad_jay Nope to increase the size of the volume
11:15 msvbhat jad_jay: That means you want to convert to distributed-replicated volume
11:15 jad_jay I replicate 100GB, I want to increase this to 150GB for example
11:16 jad_jay Nope
11:16 jad_jay Do I only need to increase the LV which is beneath ?
11:16 msvbhat In a pure replicated volume, the size of the volume is limited by the smallest brick
11:17 msvbhat So when you increase the size of the underlying LV, you increase the size of the volume
11:17 jad_jay So in simple terms it's good, nothing else to do, just increase the LV and the FS
11:17 msvbhat But that needs to be done to all bricks
11:17 jad_jay Of course
11:17 msvbhat jad_jay: Yes,
11:18 EinstCrazy joined #gluster
11:18 msvbhat jad_jay: But in the process, make sure not to delete the data on the bricks
11:18 jad_jay Is there a danger ? I mean, what if a brick is a smaller size for 10 minutes
11:19 haomaiwa_ joined #gluster
11:19 msvbhat As long as your volume is not out of space (or very near to out of space) while the client is pumping data, it shouldn't matter
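A minimal sketch of the resize msvbhat describes, to be run on every server hosting a brick of the replicated volume; it assumes XFS on LVM, and the volume group, LV, brick path and mount point are hypothetical:

    # grow the logical volume that backs the brick by 50G
    lvextend -L +50G /dev/vg_bricks/lv_brick1
    # grow the filesystem to fill the new space (xfs_growfs for XFS, resize2fs for ext4)
    xfs_growfs /bricks/brick1
    # once every brick has been grown, a client mount reports the new size
    df -h /mnt/myvol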
11:20 jad_jay And How to add a member to a replication pool ? gluster volume add-brick replica 4 server4:/serv4
11:20 haomaiwa_ joined #gluster
11:22 jad_jay msvbhat: And how to add a member to a replication pool ? gluster volume add-brick replica 4 server4:/serv4
11:22 msvbhat Yes, that increases the replica count to 4
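For reference, add-brick also takes the volume name, which the command quoted above leaves out; a hedged example, assuming a volume called myvol and a brick at /bricks/brick1 on the new server:

    # raise the replica count from 3 to 4 by adding a fourth brick
    gluster volume add-brick myvol replica 4 server4:/bricks/brick1
    # self-heal then copies the data onto the new brick; progress shows up in
    gluster volume heal myvol info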
11:22 jad_jay Brilliant !
11:22 jad_jay Thanks a lot !
11:22 jad_jay msvbhat: Last one
11:22 haomaiwa_ joined #gluster
11:22 jad_jay wait I need to think about it :)
11:23 haomaiwa_ joined #gluster
11:24 18WABITV2 joined #gluster
11:24 msvbhat :)
11:24 jad_jay msvbhat: Ok I got it :) When you choose the server for the mount, should it be one specific one from the pool, or can you choose any one?
11:25 msvbhat jad_jay: You can choose any one from the cluster
11:25 haomaiwang joined #gluster
11:26 jad_jay So is it good practice to select one and set a backup volfile server, and not to use the same first server for each client ?
11:26 ppai joined #gluster
11:26 haomaiwa_ joined #gluster
11:27 haomaiwa_ joined #gluster
11:27 hgowtham REMINDER: Gluster Community Bug Triage meeting in ~30 minutes at #gluster-meeting
11:28 overclk joined #gluster
11:28 msvbhat jad_jay: Yeah, If you are going to have more than one client, use different servers for each of them
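A sketch of what msvbhat suggests, with each client naming a different first server and falling back to the others; the hostnames and volume are hypothetical, and the option is called backup-volfile-servers in recent mount.glusterfs helpers (older releases used backupvolfile-server):

    # client A
    mount -t glusterfs -o backup-volfile-servers=serv2:serv3 serv1:/myvol /mnt/myvol
    # client B, same volume, different first server
    mount -t glusterfs -o backup-volfile-servers=serv3:serv1 serv2:/myvol /mnt/myvol

The server named in the mount command only hands out the volume layout; after that the client talks to all bricks directly, so varying the first server mainly avoids a single point of failure at mount time.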
11:29 jad_jay msvbhat: GlusterFS is meant for sharing data among clients, right?
11:29 17WABBDB0 joined #gluster
11:30 jad_jay msvbhat: So maybe it's better to use distributed if you have multiple RW clients ?
11:30 msvbhat jad_jay: I didn't understand that
11:30 msvbhat ?
11:30 haomaiwang joined #gluster
11:31 jad_jay msvbhat: I mean gfs is a tool for sharing files between a lot of clients... clients who can read and write files in it
11:31 jiffin1 joined #gluster
11:31 haomaiwang joined #gluster
11:31 msvbhat jad_jay: I think you can also put it that way, yes
11:32 jad_jay msvbhat: What is gfs good for, for you?
11:32 msvbhat It's basically a storage solution.
11:32 jad_jay msvbhat: Ok
11:32 msvbhat jad_jay: You mean which configuration?
11:32 jad_jay msvbhat: no i mean what do you use it for
11:32 haomaiwa_ joined #gluster
11:32 jad_jay msvbhat: I need a solution for webcluster
11:33 jad_jay msvbhat: NFS is a mess
11:33 jad_jay msvbhat: iSCSI is too complex for more than 2 web servers
11:33 64MAAHY54 joined #gluster
11:33 jad_jay msvbhat: NFS has very poor performance, it's no good for high RW operations
11:34 haomaiwa_ joined #gluster
11:34 jad_jay msvbhat: So I'm trying to find a good solution for sharing files which are centralized, with good RW performance
11:35 msvbhat You should be able to use it for webcluster
11:35 jad_jay msvbhat: Because RSYNC scripts are driving me nuts
11:35 jad_jay msvbhat: Is it a good idea to put the bricks in the webservers ?
11:36 jad_jay msvbhat: I mean, not to have a dedicated VM for gfs ?
11:36 kotreshhr joined #gluster
11:38 msvbhat I'm not sure I follow. You should be able to use glusterfs volume as a storage for webserver
11:40 jad_jay msvbhat: Ok, And I want to put the bricks in the web servers, so in the pool you have serv1 serv2 and serv3 which are also apache servers
11:41 ramky joined #gluster
11:41 haomaiwa_ joined #gluster
11:42 jad_jay Is it a good idea, or is it better to have gfsserv1 gfsserv2 gfsserv3 which are dedicated gfs servers
11:42 21WAAH3OS joined #gluster
11:43 jad_jay msvbhat: Is it a good idea ? Or is it better to have gfsserv1 gfsserv2 gfsserv3 which are dedicated gfs servers
11:44 jad_jay msvbhat: Maybe the gfs server takes 40% of the CPU and 64% of RAM, and so it is difficult to have webservers running next to it ?
11:45 kotreshhr joined #gluster
11:45 skoduri joined #gluster
11:46 haomaiwa_ joined #gluster
11:47 haomaiwa_ joined #gluster
11:48 21WAAH3TM joined #gluster
11:48 jad_jay msvbhat: So no answers to that :D
11:49 haomaiwa_ joined #gluster
11:51 msvbhat jad_jay: I was away...
11:51 jad_jay msvbhat: :
11:51 jad_jay msvbhat: :)
11:51 jad_jay msvbhat: ^_^
11:51 msvbhat jad_jay: But yeah, I'm not the expert on that. I think it depends on a lot of parameters
11:52 haomaiwa_ joined #gluster
11:52 jad_jay Okay we'll try with that
11:52 msvbhat jad_jay: So you can try out both and choose which works best for you?
11:52 jad_jay msvbhat: Yes, your answer shows that there is no barrier to either solution
11:52 jad_jay msvbhat: Now the best is testing...
11:53 msvbhat jad_jay: Yes...
11:53 jad_jay msvbhat: Thanks a lot, you lit up a lot of dark corners
11:53 haomaiwang joined #gluster
11:53 jad_jay msvbhat: Have a nice day
11:54 21WAAH3YU joined #gluster
11:54 msvbhat jad_jay: :) Happy to be of help
11:55 haomaiwang joined #gluster
11:56 haomaiwang joined #gluster
11:57 haomaiwang joined #gluster
11:58 haomaiwang joined #gluster
12:00 64MAAHZQK joined #gluster
12:00 jad_jay joined #gluster
12:00 evgrinsven1 joined #gluster
12:01 17WABBDYE joined #gluster
12:02 haomaiwa_ joined #gluster
12:04 haomaiwang joined #gluster
12:06 haomaiwa_ joined #gluster
12:07 21WAAH39X joined #gluster
12:08 14WAAGN09 joined #gluster
12:10 ira joined #gluster
12:10 haomaiwa_ joined #gluster
12:11 haomaiwa_ joined #gluster
12:14 kotreshhr joined #gluster
12:15 haomaiwa_ joined #gluster
12:21 spalai left #gluster
12:21 haomaiwa_ joined #gluster
12:22 14WAAGODK joined #gluster
12:23 haomaiwa_ joined #gluster
12:24 haomaiwa_ joined #gluster
12:25 haomaiwa_ joined #gluster
12:26 18WABIVBV joined #gluster
12:27 haomaiwang joined #gluster
12:28 17WABBEIV joined #gluster
12:30 Gambit15 joined #gluster
12:30 Gambit15 hellooo
12:31 Gambit15 anyone around?
12:31 haomaiwang joined #gluster
12:32 Gambit15 I'm looking for some Ceph v. Gluster advice for a small environment (2-10 racks)
12:32 haomaiwang joined #gluster
12:33 ppai joined #gluster
12:33 14WAAGOMC joined #gluster
12:35 haomaiwang joined #gluster
12:35 ndevos Gambit15: it really depends on the use-case, Ceph is mostly for block-devices, Gluster for filesystem access
12:35 Gambit15 As in iscsi type of thing?
12:36 Gambit15 Because I see it has objects, blocks & an fs
12:36 ndevos I'd compare Ceph with iscsi, and Gluster with NFS
12:36 EinstCrazy joined #gluster
12:36 kotreshhr joined #gluster
12:36 haomaiwang joined #gluster
12:37 Gambit15 And CephFS? No good?
12:37 ndevos for object access, Ceph offers an S3 interface, and Gluster has Swift
12:37 haomaiwa_ joined #gluster
12:38 ndevos CephFS is an addition to Ceph, all I heard is that it is not production ready yet, but several developers are working hard to improve that (and people use it already)
12:38 Saravanakmr joined #gluster
12:38 haomaiwang joined #gluster
12:38 ndevos on the same topic, Gluster can be used to store disk-images just fine too ;-)
12:39 Gambit15 How about redundancy across relatively small stacks?
12:39 jiffin1 joined #gluster
12:39 haomaiwang joined #gluster
12:39 Gambit15 I'd be looking at setting something up between 2 racks of servers, and then extending it beyond as needed
12:40 Gambit15 But I'm slightly concerned that a rack outage could kill the system
12:40 haomaiwa_ joined #gluster
12:40 Gambit15 I know they have slightly different methods, but is one more reliable than the other in this case?
12:41 64MAAH0RR joined #gluster
12:41 Gambit15 OpenStack is also on the cards
12:42 haomaiwang joined #gluster
12:42 Gambit15 Sorry, lots of questions, however I've not been able to find any material which shows where each excels...
12:43 haomaiwang joined #gluster
12:44 haomaiwa_ joined #gluster
12:46 haomaiwang joined #gluster
12:47 rjoseph joined #gluster
12:47 haomaiwa_ joined #gluster
12:48 haomaiwa_ joined #gluster
12:51 tdasilva Gambit15: just my 2 cents, if you are talking about multiple nodes per rack, either system should be fine in terms of redundancy as long as you configure them correctly
12:52 tdasilva like ndevos said, it's really up to what your use case is to determine what system you should be targeting
12:53 Gambit15 Yup, understood. From the looks of it, it's mostly a difference of whether I want files (Gluster) or volumes (Ceph)
12:55 Gambit15 Cheers tdasilva & ndevos
12:59 cjellick joined #gluster
13:03 ppai joined #gluster
13:03 chirino joined #gluster
13:03 DV joined #gluster
13:04 Saravanakmr joined #gluster
13:04 RedW joined #gluster
13:16 DV joined #gluster
13:17 jmarley joined #gluster
13:19 rjoseph joined #gluster
13:19 firemanxbr_ joined #gluster
13:22 Saravanakmr joined #gluster
13:22 firemanxbr joined #gluster
13:25 lpabon joined #gluster
13:26 rafi1 joined #gluster
13:31 rafi joined #gluster
13:32 d0nn1e joined #gluster
13:39 evgrinsven joined #gluster
13:44 papamoose joined #gluster
13:44 ekuric joined #gluster
13:59 ahino joined #gluster
14:02 unclemarc joined #gluster
14:02 tapoxi joined #gluster
14:05 evgrinsven left #gluster
14:05 tapoxi hi everyone, I'm new to gluster. In my brick logs I'm seeing a ton of 'operation not permitted' errors like this one, I guess a failure to set extended attributes. Any idea what it means? I'm not noticing any issues though. [2015-12-15 13:51:34.047882] I [MSGID: 115072] [server-rpc-fops.c:1773:server_setattr_cbk] 0-gv0-server: 400863: SETATTR /guidedtest462/Storage/database (38990224-d5f1-45af-a1be-c4a1529105f3) ==> (Operation not permitted) [Operation not p
14:22 wtracz2 joined #gluster
14:25 plarsen joined #gluster
14:26 dgandhi joined #gluster
14:27 dgandhi joined #gluster
14:28 dgandhi joined #gluster
14:29 dgandhi joined #gluster
14:30 dgandhi joined #gluster
14:30 shaunm joined #gluster
14:31 hamiller joined #gluster
14:33 B21956 joined #gluster
14:34 arcolife joined #gluster
14:35 ahino joined #gluster
14:37 DV joined #gluster
14:41 tapoxi left #gluster
14:49 bennyturns joined #gluster
14:54 nbalacha joined #gluster
14:57 wtracz2 Hi. Is GlusterFS a good solution for branch office / remote office type requirements? We've got some designers in another location who need to share assets with people, and I'm wondering if I can use Gluster to solve the need.
14:58 JonathanD joined #gluster
14:59 hamiller wtracz2, You may need to be a bit more specific. REPLICATED volumes require a 1msec latency between systems and are synchronous. Geo-Replication will tolerate much slower connections, but the slave side is considered 'Read Only' and may actually take minutes to be in sync
14:59 wtracz2 There is RW on both sides so that rules out geo-replication I guess.
15:00 wtracz2 Are you aware of any open source FS / solution that addresses this? There is DFSR for Windows but nothing I can see for Linux.
15:03 cjellick joined #gluster
15:03 coredump joined #gluster
15:04 hamiller wtracz2, What about just using NFS?
15:05 wtracz2 I was looking for some kind of acceleration at the edge nodes as they are on a WAN link. I'll go and bug the #samba guys :)
15:05 hamiller wtracz2, Good Hunting
15:14 sagarhani joined #gluster
15:29 MessedUpHare joined #gluster
15:30 MessedUpHare Hi everyone, I'm getting a reliable crash from the samba-vfs-glusterfs package.
15:31 MessedUpHare When using ctdb to failover virtual ip addresses
15:32 MessedUpHare I get "messaging_reinit() failed: NT_STATUS_INTERNAL_DB_ERROR" from smbd in syslog and it sends the pids to [defunct]
15:32 MessedUpHare Does anyone know a next step I should take to report the issue and/or try to resolve it?
15:44 arcolife joined #gluster
15:51 rjoseph joined #gluster
15:54 nage joined #gluster
15:55 DV joined #gluster
15:55 DV__ joined #gluster
16:00 csaba joined #gluster
16:02 jiffin joined #gluster
16:03 wushudoin joined #gluster
16:04 wushudoin joined #gluster
16:07 Humble joined #gluster
16:09 kotreshhr left #gluster
16:15 bowhunter joined #gluster
16:19 Manikandan joined #gluster
16:20 ivan_rossi left #gluster
16:21 ekuric joined #gluster
16:22 overclk joined #gluster
16:22 cjellick joined #gluster
16:23 klaxa joined #gluster
16:26 Manikandan joined #gluster
16:29 mobaer joined #gluster
16:29 skoduri joined #gluster
16:40 shubhendu joined #gluster
16:42 Intensity Hi. I'm wondering if GlusterFS can be used to do file-level tiering, like btier but on a file level. Or like Apple's fusion drive but for any filesystem.
16:43 tsaavik joined #gluster
16:44 Intensity So if I have /mnt/fast and /mnt/large, I'd like to have /mnt/fusion that keeps popular files on the fast mount point, and optionally an asynchronous backup on the large mount point.
16:46 tsaavik after upgrading to 3.5.6 my self-heal daemon is no longer listening on a tcp port, is this an issue or expected behaviour?
16:46 tsaavik Self-heal Daemon on localhost    N/A     Y       11331
16:49 ndevos Intensity: yes, see https://github.com/gluster/glusterfs-specs/blob/master/done/Features/tier.md and https://github.com/gluster/glusterfs-specs/blob/master/done/GlusterFS%203.7/Data%20Classification.md
16:49 glusterbot Title: glusterfs-specs/tier.md at master · gluster/glusterfs-specs · GitHub (at github.com)
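A rough sketch of attaching a hot tier along the lines of the tier spec linked above; the exact syntax varies across 3.7 builds (some use attach-tier), so treat it as an assumption and check gluster volume help on your release, and the volume name and SSD brick paths are hypothetical:

    # attach a replicated pair of SSD bricks as the hot tier of an existing volume
    gluster volume tier myvol attach replica 2 ssd1:/bricks/hot ssd2:/bricks/hot
    # check promotion/demotion activity between the hot and cold tiers
    gluster volume tier myvol status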
16:50 ndevos tsaavik: yes, self-heal does not need to listen on a port
16:50 tsaavik ndevos: Thanks!
16:54 rafi joined #gluster
17:00 cjellick_ joined #gluster
17:03 cjellick joined #gluster
17:04 skylar joined #gluster
17:07 DV__ joined #gluster
17:07 cjellick joined #gluster
17:09 wtracz2 joined #gluster
17:12 calavera joined #gluster
17:12 cjellick joined #gluster
17:25 mhulsman joined #gluster
17:37 Gambit15 left #gluster
17:40 DV__ joined #gluster
17:45 cjellick joined #gluster
17:49 kotreshhr joined #gluster
18:11 daMaestro joined #gluster
18:12 kotreshhr joined #gluster
18:16 bennyturns joined #gluster
18:22 tsaavik I seem to have some gfids that are stuck (they always show in gluster volume heal gv0 info). What's the best way to deal with them? I'm okay nuking them
18:25 ndevos ~split-brain | tsaavik
18:25 glusterbot tsaavik: To heal split-brains, see https://gluster.readthedocs.org/en/release-3.7.0/Features/heal-info-and-split-brain-resolution/ For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
18:26 tsaavik Is it split brain? gluster volume heal gv0 info split-brain shows: Number of entries: 0
18:27 ahino joined #gluster
18:27 JoeJulian @lucky what is this new .glusterfs
18:27 glusterbot JoeJulian: http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Creating_Replicated_Volumes
18:27 JoeJulian pfft... not even close
18:28 ndevos tsaavik: hmm, not sure why certain gfids would always show up in the "gluster volume heal ... info" output
18:29 ndevos oh, maybe that is the command that also shows the log/journal of the healed entries? sorry, I dont remember exactly
18:29 JoeJulian tsaavik: Ok... here's the story with that. See https://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/ for how the gfid directory structure is laid out.
18:29 glusterbot Title: What is this new .glusterfs directory in 3.3? (at joejulian.name)
18:29 tsaavik Cool, thanks for links!
18:30 JoeJulian tsaavik: If the gfid file exists and a stat shows links >= 2, then it's a real file and you can find which file by looking for the inode number on the brick (find -inum)
18:31 tsaavik awesome, will do. I was also playing with https://gist.github.com/semiosis/4392640 (gfid-resolver.sh) before I asked
18:31 glusterbot Title: Glusterfs GFID Resolver · Turns a GFID into a real path in the brick · GitHub (at gist.github.com)
18:31 JoeJulian If it doesn't exist, then it's a stale entry under .glusterfs/indices/xattrop
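A sketch of the lookup JoeJulian describes, assuming a brick at /bricks/brick1 and a made-up GFID; on the brick, a GFID maps to .glusterfs/<first two hex chars>/<next two>/<full gfid>:

    BRICK=/bricks/brick1
    GFID=abc12345-6789-4def-9012-34567890abcd   # hypothetical
    GFILE=$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
    # a hard-link count >= 2 means a real file still references this gfid
    stat -c 'links=%h inode=%i' "$GFILE"
    # resolve the inode number back to the real path on the brick
    INODE=$(stat -c '%i' "$GFILE")
    find "$BRICK" -inum "$INODE" -not -path "*/.glusterfs/*"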
18:37 Rapture joined #gluster
18:41 cjellick_ joined #gluster
18:43 F2Knight joined #gluster
18:50 cjellick joined #gluster
18:57 mobaer joined #gluster
19:00 calavera joined #gluster
19:00 jiffin joined #gluster
19:00 tsaavik okay, I think I found the underlying problem. both my nodes have a peer status of  "State: Sent and Received peer request (Connected)" gonna change state=5 to =3 and restart
19:02 JoeJulian I wish I had a flow chart of how peer states happen.
19:03 tsaavik it's supposed to end in "Peer in Cluster (Connected)" right?
19:04 JoeJulian right
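The state tsaavik is editing lives in glusterd's peer files under the default working directory /var/lib/glusterd; a cautious sketch (manual edits are a last resort, and glusterd should be stopped first):

    service glusterd stop                      # or: systemctl stop glusterd
    grep -H state= /var/lib/glusterd/peers/*   # e.g. state=5 (peer request sent and received)
    # state=3 corresponds to "Peer in Cluster"; edit the file on both nodes, then
    service glusterd start
    gluster peer status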
19:04 bennyturns joined #gluster
19:05 mbukatov joined #gluster
19:05 bennyturns joined #gluster
19:05 csaba1 joined #gluster
19:08 Intensity ndevos: Cool, thanks for the link there. I'm wondering if tiering might be combined one day with redundancy. It looks like a file belongs either to hot or to cold. But if the storage space of the HD far exceeds that of the SSD, I figure I might as well replicate that file asynchronously (best-effort). But managing an index and meeting the minimum bar for consistency might require some effort.
19:12 mhulsman joined #gluster
19:12 marbu joined #gluster
19:13 csim_ joined #gluster
19:14 lkoranda joined #gluster
19:23 csim joined #gluster
19:26 csaba1 joined #gluster
19:29 newb-g joined #gluster
19:30 newb-g Hi. Is there a way for gluster to release file locks that held by a disconnected brick?
19:37 cjellick joined #gluster
19:52 cjellick joined #gluster
19:52 Bardack hello , important question:
19:52 Bardack we currently have a gluster 3.5.2 with bricks and bla bla
19:53 Bardack can we mount the bricks to a new server with a gluster 3.7.1 ?
19:53 Bardack and recreate the shares on the bricks with --force
19:53 Bardack will it work ?
19:54 newb-g Is there a way for gluster to release file locks that are held by a disconnected brick?
20:11 DV__ joined #gluster
20:20 mhulsman joined #gluster
20:40 calavera joined #gluster
20:42 newb-g Is there a way for gluster to release file locks that are held by a disconnected brick?
20:44 DaKnObCS joined #gluster
20:51 bluenemo joined #gluster
20:54 DV__ joined #gluster
21:02 bowhunter joined #gluster
21:22 jwang joined #gluster
21:29 B21956 joined #gluster
21:54 tsaavik does gluster volume heal gv0 info split-brain lie? It shows a file in split-brain, but it's identical on both fuse mounts and heal statistics shows "No. of entries in split-brain: 0"
21:55 tsaavik (I used splitmount to fix)
22:03 JoeJulian It's a log
22:03 JoeJulian Note the timestamp.
22:03 JoeJulian tsaavik: ^
22:09 dgandhi joined #gluster
22:16 bennyturns joined #gluster
22:24 DV joined #gluster
22:32 gildub joined #gluster
22:36 DV joined #gluster
22:50 bluenemo joined #gluster
22:50 Bardack is it possible to use geo-replication with servers having different versions of glusterfs ?
23:01 JoeJulian Bardack: It should be ok. I haven't noticed anything that should prevent it.
23:02 skylar joined #gluster
23:04 Bardack nice JoeJulian thx
23:05 Bardack we had a cool thing happening today ... :)
23:05 Bardack a share started to have some ???? in permissions and so on …
23:05 Bardack we did a xfs_check
23:05 Bardack which got killed after 2h
23:05 Bardack and bam, our NAS moved from 30TB used to 15TB used
23:05 Bardack nice :)
23:07 JoeJulian Hah, nice.
23:07 zhangjn joined #gluster
23:08 Bardack yeah kinda :)
23:08 Bardack but our NAS is on a crappy single VM, running gluster 3.5 … we have thousands of issues with it
23:08 Bardack we are currently redoing a full architecture (a good one this time, with multiple heads/nodes) and gluster 3.7
23:08 Bardack but here we had to find a quick and dirty solution :)
23:09 Bardack other question JoeJulian, activating geo-rep on an existing (and big, with plenty of small files) volume should work right away ?
23:09 Bardack or do we have to think about something ?
23:10 JoeJulian Should be pretty harmless.
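A sketch of turning on geo-replication for an existing volume, along the lines discussed above; the master volume, slave host and slave volume names are hypothetical, the slave volume must already exist, and root SSH access to the slave is assumed:

    # one-time: create and distribute the pem keys used by geo-replication
    gluster system:: execute gsec_create
    # create and start the session from the master cluster
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    # the initial crawl of a big volume with many small files can take a long while
    gluster volume geo-replication mastervol slavehost::slavevol status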
23:16 Bardack mkay :)
23:16 Bardack so i can be quite confident :)
23:17 Bardack all volumes are back … phew :)
23:17 DaKnObCS joined #gluster
23:40 sacrelege joined #gluster
23:40 sacrelege joined #gluster
23:55 klaxa joined #gluster
