
IRC log for #gluster, 2014-08-08


All times shown according to UTC.

Time Nick Message
00:21 tdasilva joined #gluster
00:29 nbarnett joined #gluster
00:33 sputnik13 joined #gluster
00:36 sputnik13 joined #gluster
00:48 barnim joined #gluster
00:49 ira joined #gluster
00:52 zerick joined #gluster
00:53 nishanth joined #gluster
01:24 harish_ joined #gluster
01:24 MacWinner joined #gluster
01:39 msbrogli joined #gluster
01:46 ilbot3 joined #gluster
01:46 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 sputnik13 joined #gluster
01:50 sputnik13 joined #gluster
01:57 overclk joined #gluster
02:01 sputnik13 joined #gluster
02:04 sputnik13 joined #gluster
02:05 haomai___ joined #gluster
02:10 raghug joined #gluster
02:26 decimoe joined #gluster
02:32 ababu joined #gluster
02:41 harish_ joined #gluster
02:42 RameshN joined #gluster
02:51 ababu joined #gluster
02:52 decimoe left #gluster
02:52 bharata-rao joined #gluster
02:55 ACiDGRiM can a sparse file on one replica take up less space on the other replica?
03:00 m0zes joined #gluster
03:12 overclk ACiDGRiM: are the FSes on each replica the same (fs type, etc.)?
03:41 ACiDGRiM yes, all are xfs with the documentation-recommended options
03:41 ACiDGRiM all bricks are LVM with equal block count
03:42 ACiDGRiM replica 2 was emptied and a full heal was initiated, but it ended up with the same difference
03:43 ACiDGRiM about 100GB difference
03:43 shubhendu__ joined #gluster
03:45 overclk ACiDGRiM: ah, ok. so there's healing in the picture.
03:52 nbalachandran joined #gluster
03:55 ACiDGRiM yes, I'm currently trying to resolve this, but the last 100GB won't heal
03:55 ACiDGRiM This is the 2nd time I've tried a full wipe and reheal due to this behavior
03:56 ACiDGRiM Thanks for your help
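A hedged way to narrow down a used-space gap like the one described above is to compare apparent size against allocated blocks on each brick; sparse files differ only in the allocated figure. The brick and file paths below are placeholders, not the ones from this log:

    # On each replica, compare logical (apparent) size with the space actually allocated.
    du -sh --apparent-size /bricks/brick1/vol    # logical size; should match across replicas
    du -sh /bricks/brick1/vol                    # allocated size; smaller where holes were preserved
    # Per-file view for a suspect file (size in bytes vs. 512-byte blocks allocated):
    stat -c '%n apparent=%s blocks=%b' /bricks/brick1/vol/path/to/file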
03:59 zerick joined #gluster
04:00 rjoseph joined #gluster
04:01 ACiDGRiM I found this discrepancy while checking free space after using the split-mount script
04:06 ACiDGRiM Is it safe to delete files directly out of a brick?
04:07 kshlm joined #gluster
04:13 jiku joined #gluster
04:14 spandit joined #gluster
04:25 itisravi joined #gluster
04:37 kdhananjay joined #gluster
04:39 coredump joined #gluster
04:39 Rafi_kc joined #gluster
04:40 anoopcs joined #gluster
04:43 rastar joined #gluster
04:48 hagarth joined #gluster
04:48 prasanth_ joined #gluster
04:49 ACiDGRiM I found a safe way to fix the large heal discrepancy I'm seeing. I mounted the volume on the node with the proper used space and copied the whole directory structure listed in heal info to another directory of the volume. This resolved the difference for some files that wouldn't heal over. I then removed the whole original directory that was the location of the discrepancy from each replica, under the separate mount points created by the split-mount
04:49 ACiDGRiM script, and then moved the recovered folders back to their original location
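A rough sketch of the recovery procedure described above, assuming a client mount at /mnt/gluster-vol and per-brick mounts created by the split-mount script at /mnt/brick-replica1 and /mnt/brick-replica2 (all names hypothetical):

    # 1. Through the client mount, copy the affected tree to a scratch location on the same volume.
    cp -a /mnt/gluster-vol/affected-dir /mnt/gluster-vol/affected-dir.recovered
    # 2. Remove the original directory from each replica via the separate per-brick mounts.
    rm -rf /mnt/brick-replica1/affected-dir /mnt/brick-replica2/affected-dir
    # 3. Move the recovered copy back to the original location through the client mount.
    mv /mnt/gluster-vol/affected-dir.recovered /mnt/gluster-vol/affected-dir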
04:51 overclk joined #gluster
04:53 anoopcs joined #gluster
04:53 ramteid joined #gluster
05:07 jiffin joined #gluster
05:11 vpshastry joined #gluster
05:12 kdhananjay joined #gluster
05:12 raghug joined #gluster
05:13 ppai joined #gluster
05:17 a2_ joined #gluster
05:17 avati joined #gluster
05:18 RameshN joined #gluster
05:19 sahina joined #gluster
05:20 coredump joined #gluster
05:21 karnan joined #gluster
05:22 ricky-ticky joined #gluster
05:23 nbalachandran joined #gluster
05:24 dusmant joined #gluster
05:24 ACiDGRiM what does this mean?
05:24 ACiDGRiM Brick replica1:/mnt/99d6abab-067e-4e65-8469-dbdccdaeab74/0d875814-0bb5-4834-818b-6a50030538f8/gluster/
05:24 ACiDGRiM /
05:24 ACiDGRiM Number of entries: 1
05:24 ACiDGRiM the folder that needs healing is "/"?
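For reference, an entry of "/" in heal info refers to the root of the brick, i.e. gluster thinks the top-level directory itself has a pending entry heal. One way to inspect it, with the volume name and brick path as placeholders:

    # Non-zero trusted.afr.<volume>-client-N counters on the brick root indicate pending heals.
    getfattr -m . -d -e hex /bricks/brick1/vol
    # Re-check which entries gluster still considers un-healed.
    gluster volume heal <volume> info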
05:34 ndarshan joined #gluster
05:35 deepakcs joined #gluster
05:45 hagarth joined #gluster
05:55 rastar joined #gluster
05:57 overclk joined #gluster
05:58 lalatenduM joined #gluster
06:00 LebedevRI joined #gluster
06:02 shubhendu__ joined #gluster
06:08 nshaikh joined #gluster
06:19 ws2k33 joined #gluster
06:23 gildub joined #gluster
06:28 kumar joined #gluster
06:32 cris_mac joined #gluster
06:33 cristov_mac joined #gluster
06:34 ninkotech joined #gluster
06:36 keytab joined #gluster
06:37 mbukatov joined #gluster
06:37 cristov_mac joined #gluster
06:40 cristov_mac joined #gluster
06:41 BossR joined #gluster
06:44 hagarth joined #gluster
06:45 ninkotech joined #gluster
06:50 deepakcs joined #gluster
07:00 shubhendu__ joined #gluster
07:01 gildub_ joined #gluster
07:04 mbukatov joined #gluster
07:06 rastar joined #gluster
07:07 ctria joined #gluster
07:08 vpshastry1 joined #gluster
07:10 RameshN joined #gluster
07:22 rjoseph joined #gluster
07:22 dusmant joined #gluster
07:24 ekuric joined #gluster
07:28 vpshastry joined #gluster
07:34 ninkotech joined #gluster
07:34 rjoseph left #gluster
07:34 overclk joined #gluster
07:54 overclk joined #gluster
07:57 kdhananjay joined #gluster
08:02 liquidat joined #gluster
08:09 ekuric joined #gluster
08:10 dusmant joined #gluster
08:16 gothos left #gluster
08:18 nbalachandran joined #gluster
08:27 itisravi_ joined #gluster
08:41 [ilin] joined #gluster
08:45 Slashman joined #gluster
08:49 [ilin] i have a two-server gluster - from gluster volume status i see that the self-heal daemon is not running on either host... how can i turn it on?
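A hedged sketch of the usual ways to bring the self-heal daemon back on a running volume, with <volume> as a placeholder:

    # If the daemon was switched off via a volume option, turn it back on.
    gluster volume set <volume> cluster.self-heal-daemon on
    # A forced start on an already-started volume typically respawns missing brick and self-heal processes.
    gluster volume start <volume> force
    # Confirm the Self-heal Daemon rows now show as online.
    gluster volume status <volume>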
08:50 mhoungbo joined #gluster
09:05 vimal joined #gluster
09:11 prasanth_ joined #gluster
09:14 raghu joined #gluster
09:30 deepakcs joined #gluster
09:50 nbalachandran joined #gluster
09:53 overclk joined #gluster
09:55 aravindavk joined #gluster
10:15 bjornar joined #gluster
10:21 kaushal_ joined #gluster
10:21 haomaiwa_ joined #gluster
10:22 dockbram joined #gluster
10:23 nbalachandran joined #gluster
10:23 Alex____2 joined #gluster
10:23 cmtime joined #gluster
10:23 lalatenduM joined #gluster
10:24 ricky-ticky joined #gluster
10:24 purpleidea joined #gluster
10:24 prasanth_ joined #gluster
10:25 marcoceppi joined #gluster
10:25 liquidat joined #gluster
10:26 ghenry joined #gluster
10:26 ghenry joined #gluster
10:30 ppai joined #gluster
10:32 overclk joined #gluster
10:33 aravindavk joined #gluster
10:36 ira joined #gluster
10:38 edward1 joined #gluster
10:45 msbrogli joined #gluster
11:03 overclk joined #gluster
11:14 ppai joined #gluster
11:35 mojibake joined #gluster
11:37 overclk joined #gluster
11:41 diegows joined #gluster
11:42 haomaiwa_ joined #gluster
12:01 julim joined #gluster
12:10 stickyboy joined #gluster
12:12 siel joined #gluster
12:16 B21956 joined #gluster
12:22 mojibake A beginner's question. Still learning and exploring. I created a "3 x 2 = 6" (replica 2) volume. However, for testing purposes I have only turned on 2 of the 6 nodes. But if I run "gluster volume info" or "gluster volume status {volume}" there is no feedback like "four bricks are down" or "volume in failed state". Am I missing something, or do I have wrong expectations of what those commands should be outputting? Maybe I am using the wrong
12:22 mojibake commands?
12:29 chirino joined #gluster
12:30 siel joined #gluster
12:32 skippy joined #gluster
12:34 [ilin] mojibake: how did you create the volume?
12:37 bit4man joined #gluster
12:39 mojibake hmm, looking back at my history "gluster volume create web-content replica 2 server1:/export/gfsvol1/brick1...etc"
12:40 mojibake As stated, I was just testing, so I did not turn on some nodes, and I could tell some test files were missing until I turned on at least 3 of the bricks in the distributed set.
12:41 mojibake But neither volume status nor volume info gave any heads-up that the volume was in a degraded state
12:42 mojibake Volume Name: web-content
12:42 mojibake Type: Distributed-Replicate
12:42 mojibake Volume ID: 9a334e02-56f2-4538-9af4-6a5dc9c3721e
12:42 mojibake Status: Started
12:42 mojibake Number of Bricks: 3 x 2 = 6
12:42 mojibake Transport-type: tcp
12:42 mojibake Bricks:
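"gluster volume info" only reports the configured layout; per-brick liveness shows up in "gluster volume status", where bricks on powered-off nodes appear with N in the Online column. Roughly, with hypothetical hosts and paths:

    gluster volume status web-content
    # Status of volume: web-content
    # Gluster process                            Port    Online  Pid
    # Brick server1:/export/gfsvol1/brick1       49152   Y       1234
    # Brick server2:/export/gfsvol1/brick2       N/A     N       N/A
    # (bricks on powered-off nodes show Online = N)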
12:54 bennyturns joined #gluster
13:05 bala joined #gluster
13:22 chirino joined #gluster
13:22 gmcwhistler joined #gluster
13:23 glusterbot New news from newglusterbugs: [Bug 1128165] mount.glusterfs fails to check return of mount command. <https://bugzilla.redhat.com/show_bug.cgi?id=1128165>
13:24 ninkotech_ joined #gluster
13:24 ninkotech__ joined #gluster
13:31 luckyinva joined #gluster
13:31 msmith joined #gluster
13:38 luckyinva Is it possible to change from a distributed to a replicated config on existing volumes?
13:39 sputnik13 joined #gluster
13:39 luckyinva i created a 3-node volume and forgot to pass the replica count when doing so
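One way to do the conversion asked about above is to add bricks while raising the replica count on the add-brick command; a sketch for a 3-brick distribute volume, with all server and path names as placeholders:

    # Add 3 new bricks and raise the replica count to 2, turning a 3-brick distribute
    # volume into a 3 x 2 distributed-replicate volume.
    gluster volume add-brick <volume> replica 2 server4:/export/brick server5:/export/brick server6:/export/brick
    # The new replica bricks start empty; kick off a full self-heal to copy existing data onto them.
    gluster volume heal <volume> full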
13:42 daxatlas joined #gluster
13:45 bala joined #gluster
13:58 qdk joined #gluster
14:06 wushudoin joined #gluster
14:08 rwheeler joined #gluster
14:09 theron joined #gluster
14:09 ProT-0-TypE joined #gluster
14:10 msmith joined #gluster
14:11 mbukatov joined #gluster
14:15 bala joined #gluster
14:15 UnwashedMeme left #gluster
14:22 plarsen joined #gluster
14:23 hagarth joined #gluster
14:26 siel joined #gluster
14:26 siel joined #gluster
14:27 andreask joined #gluster
14:46 tdasilva joined #gluster
14:57 theron joined #gluster
15:01 shubhendu__ joined #gluster
15:02 hagarth joined #gluster
15:08 nbalachandran joined #gluster
15:11 ndk joined #gluster
15:12 ninthBit joined #gluster
15:14 giannello joined #gluster
15:16 overclk joined #gluster
15:16 chirino joined #gluster
15:29 daxatlas Question: I'm trying to clean up a split-brain scenario on Gluster 3.4.1 (Cent6.5) via http://www.joejulian.name/blog/fixing-split-brain-with-glusterfs-33/. If I'm clearing files, the commands in the post seem perfect. If I'm clearing directories, I can make it rm -rf the directory, but not the gfid. So, if I kill the directories and don't kill the gfids of the files inside them, I'm afraid I'm going to have a bunch of orphan gfids that will cause issues
15:39 rotbeard joined #gluster
15:40 bennyturns joined #gluster
15:46 bennyturns joined #gluster
16:02 msbrogli joined #gluster
16:18 harish_ joined #gluster
16:31 daxatlas joined #gluster
16:48 sputnik13 joined #gluster
16:49 sputnik13 joined #gluster
16:50 JoeJulian daxatlas: I consider split-brain directories a bug, especially if the uid/gid/permissions match. Unfortunately, the only way to clear that is to zero out the trusted.afr attributes on a brick.
16:50 JoeJulian @extended attributes
16:50 glusterbot JoeJulian: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
16:52 daxatlas You clear the trusted.afr attributes on the whole brick? Or just on the directory in question?
16:52 diegows joined #gluster
16:53 JoeJulian Just the directory
16:54 daxatlas Okay. And it doesn't have to be recursive? It's just the directory itself, not requiring the files inside?
17:00 JoeJulian Right
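A sketch of what clearing the directory's afr attributes looks like on one brick, with volume and brick paths as placeholders; the exact trusted.afr key names come from the getfattr output referenced above:

    # On the brick (not the client mount), list the attributes of the split-brain directory.
    getfattr -m . -d -e hex /bricks/brick1/vol/path/to/dir
    # Zero out each trusted.afr.<volume>-client-N key that shows non-zero pending counters.
    setfattr -n trusted.afr.<volume>-client-0 -v 0x000000000000000000000000 /bricks/brick1/vol/path/to/dir
    setfattr -n trusted.afr.<volume>-client-1 -v 0x000000000000000000000000 /bricks/brick1/vol/path/to/dir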
17:21 kumar joined #gluster
17:30 rwheeler joined #gluster
17:41 mojibake joined #gluster
17:43 [ilin] left #gluster
17:44 daxatlas joined #gluster
17:47 daxatlas Sweet. Thanks JoeJulian. :)
18:46 swebb joined #gluster
18:56 LebedevRI joined #gluster
18:58 siel joined #gluster
18:58 siel joined #gluster
19:04 daxatlas joined #gluster
19:35 msmith_ joined #gluster
19:43 eshy joined #gluster
19:43 eshy joined #gluster
19:50 daxatlas joined #gluster
20:00 overclk joined #gluster
20:55 systemonkey joined #gluster
20:56 theron_ joined #gluster
20:57 theron joined #gluster
20:58 Alex____2 joined #gluster
21:09 sjm joined #gluster
21:11 quique joined #gluster
21:15 quique i have a large share (1TB) that previously was a samba share that i'd like to make available over gluster. it's an lvm that i'll connect to the gluster vm. I'd like to know if there is another way to make the files available without having to mount the gluster share on the gluster server and copy them over?
21:16 theron_ joined #gluster
21:17 JoeJulian Do you plan on creating a replicated volume, or are you just going to have the one brick?
21:19 quique probably one brick, but maybe not.  why does that matter?
21:19 sjm left #gluster
21:19 JoeJulian Just when deciding how detailed I need to get.
21:19 sjm joined #gluster
21:21 JoeJulian I know it will work with 3.4 to just create a new volume with that populated drive as one of the bricks. I /think/ that would work with 3.5 as well, but I've had people reporting issues with regard to replicated volumes. Since you're not doing that, I'd probably try it.
21:22 JoeJulian Worst case, you create the volume, see errors in your logs. You mount the client and run a find on the client mountpoint. That seems to fix any errors people see in the logs.
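A sketch of the approach JoeJulian describes, assuming the populated LVM volume is already mounted at /mnt/existing-data on the gluster server (all names are placeholders):

    # Create and start a single-brick volume on top of the already-populated directory.
    gluster volume create <volume> <server>:/mnt/existing-data
    gluster volume start <volume>
    # Mount the volume as a client and walk the tree once; the lookups let gluster
    # create the missing gfid/.glusterfs metadata for the pre-existing files.
    mount -t glusterfs <server>:/<volume> /mnt/<volume>
    find /mnt/<volume> >/dev/null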
21:52 balacafalata joined #gluster
22:15 caiozanolla joined #gluster
22:17 caiozanolla i have a 2-node replicated cluster running 3.5.2. Have replaced both nodes (one at a time) recently due to hardware issues. Not sure when, but my self-heal daemon stopped working and it's reported as stopped in "gluster volume status". how do I turn it back on?
22:20 MacWinner joined #gluster
22:41 sputnik13 joined #gluster
22:42 qdk joined #gluster
22:48 BossR joined #gluster
22:50 BossR Good evening people - I am curious if anyone has experience running GlusterFS with nginx as a web cluster... Before I get too involved I wanted to know if it is even possible and how the performance is
22:51 msmith joined #gluster
22:54 sputnik13 joined #gluster
23:29 sputnik13 joined #gluster
23:50 sjm left #gluster
