
IRC log for #gluster, 2016-09-14


All times shown according to UTC.

Time Nick Message
00:05 baudster joined #gluster
00:07 misc randomly kill-9 process on the cluster ?
00:07 misc I am quite sure there is a tool to corrupt fs
00:08 misc https://serverfault.com/questions/40302/how-to-corrupt-a-file-system
00:08 glusterbot Title: linux - How to corrupt a file system - Server Fault (at serverfault.com)
00:11 eightyeight Haha
00:12 eightyeight I'm trying to find exercises that produce some sort of observable error, either via Gluster or otherwise, so they can simulate the troubleshooting experience.
00:12 eightyeight Maybe I should hit the mailing list, and see what people need help with there.
00:15 misc do they already know gluster ?
00:15 misc or they are discovering ?
00:17 nigelb joined #gluster
00:21 eightyeight They will have had a strong introductory course on several topics.
00:22 eightyeight By this point, while not experts, they're not complete novices either.
00:43 JoeJulian eightyeight: split-brain data won't be detectable unless you have bitrot enabled. You can set split-brain metadata with xattrs for them to solve.
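
A rough sketch of the xattr idea above, for building a lab exercise: on a replica 2 volume, writing conflicting AFR changelog xattrs onto the same file on both bricks makes each replica accuse the other of pending writes, which the heal commands then report as split-brain. The volume name (testvol) and brick paths below are illustrative, not from this log.

    # on server A's brick: claim pending data+metadata ops against replica 1
    setfattr -n trusted.afr.testvol-client-1 -v 0x000000010000000100000000 \
        /bricks/testvol/brick/exercise/file.txt
    # on server B's brick: claim pending data+metadata ops against replica 0
    setfattr -n trusted.afr.testvol-client-0 -v 0x000000010000000100000000 \
        /bricks/testvol/brick/exercise/file.txt
    # from any node, the file should now be listed as split-brained
    gluster volume heal testvol info split-brain
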
00:45 JoeJulian You could set up some sparse VM images that, if full, take up 150% of the brick size without using sharding (because it wasn't available when they did this), share the bricks with other volumes in a way that allows a mismatched utilization of bricks, then don't do anything to avoid the disaster until one of the bricks is 100% full, then expect some plan that doesn't cause hardship to the customer - even though you told them two years ago that
00:45 JoeJulian mixing 20TB bricks with 60TB bricks was a horribly bad idea.... (I'm not bitter).
00:56 shdeng joined #gluster
00:58 caitnop joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 Lavoaster_ joined #gluster
01:54 Lavoaster_ Hey, I've got a very large directory that's taking way too long to even start deleting files from via the client mounts. How would I go about deleting the files directly from the brick?
02:00 harish joined #gluster
02:01 JoeJulian Lavoaster_: How are you deleting the files?
02:01 Lavoaster_ find . -name "*.pdf" -print0 | xargs -0 rm
02:03 Lavoaster_ I'm currently attempting the rsync -a --delete option, but that is still "sending incremental file list"
02:03 JoeJulian So... find is going to have to stat every directory entry to determine if it's a directory in which to traverse. That's going to perform a self-heal check with every lookup (part of the fstat i/o process).
02:03 Lavoaster_ These commands I'm running aren't on the brick atm. I'm attempting to delete from the mount so that it replicates.
02:05 JoeJulian /bin/rm * should actually run faster
02:07 Lavoaster_ The directory currently has ~627,389 files in it.
02:07 JoeJulian To delete from the bricks, you'll need to delete the file and its hardlink from the .glusterfs directory (or symlink if it's a directory). See https://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/ to understand how the .glusterfs directory works.
02:07 glusterbot Title: What is this new .glusterfs directory in 3.3? (at joejulian.name)
02:08 JoeJulian I assume there's a lot more than 600k files you want to keep?
02:09 Lavoaster_ Yeah :(
02:11 JoeJulian If you created a list of files (not directories) from the brick, and used that list to rm, it would still be faster than using find on the client mount.
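
A sketch of the two routes discussed above; the volume name, server name and paths are assumptions for illustration. The simpler route builds the file list on a brick but feeds it to rm on the client mount, so gluster itself handles replication and the .glusterfs links; the brick-side route must also remove each file's hardlink under .glusterfs, derived from its trusted.gfid xattr as described in the blog post.

    # route 1: list on the brick, delete through the client mount
    ssh server1 "cd /bricks/myvol/brick/bigdir && find . -maxdepth 1 -type f -name '*.pdf'" \
        > /tmp/pdfs.txt
    cd /mnt/myvol/bigdir && xargs -a /tmp/pdfs.txt -d '\n' rm -f

    # route 2: delete directly on EVERY brick, including the .glusterfs hardlink
    cd /bricks/myvol/brick
    find bigdir -maxdepth 1 -type f -name '*.pdf' | while read -r f; do
        gfid=$(getfattr -n trusted.gfid -e hex "$f" 2>/dev/null \
               | awk -F= '/trusted.gfid/ {print substr($2, 3)}')
        uuid="${gfid:0:8}-${gfid:8:4}-${gfid:12:4}-${gfid:16:4}-${gfid:20:12}"
        rm -f ".glusterfs/${gfid:0:2}/${gfid:2:2}/${uuid}" "$f"
    done
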
02:15 magrawal joined #gluster
02:15 Lavoaster_ Ah yes, that is indeed much faster. At least it's not spending time trying to get every single file name from that directory now. Thanks JoeJulian.
02:15 JoeJulian You're welcome. :)
02:15 Lavoaster_ Could have gone to bed hours ago instead of hoping that a command would just start running.
02:15 JoeJulian Hehe, been there, done that.
02:16 Lavoaster_ Well, at least I now have the knowledge. x)
02:16 JoeJulian I think worse is when it's started and you realize you should just stop it because there's a more efficient way that will still finish faster.
02:18 Lavoaster_ Yeah x)
02:18 Lavoaster_ This cleanup looks like it's going to take a few hours.
02:19 Lavoaster_ I can probably go sleep now. Thanks again.
02:19 daMaestro joined #gluster
02:19 JoeJulian Glad I could help.
02:19 Lavoaster_ left #gluster
02:23 d0nn1e joined #gluster
02:39 daMaestro heya JoeJulian. ltns
02:39 daMaestro JoeJulian, thanks for keeping up the good work with gluster. i keep seeing solid forward progress making it into fedora and other platforms.
02:39 daMaestro JoeJulian, things have really come along since the 2.x days ;-)
02:40 JoeJulian daMaestro: Meh, I just hang out and BS with people. The devs do all the work.
02:40 daMaestro and actually everyone here that contributes. i just recall JoeJulian from way back in the day.
02:40 daMaestro yeah kkeithley took over package maintenance for me. it was a relief ;-)
02:40 JoeJulian And now he has a whole team doing it.
02:41 JoeJulian I was about to. I was working toward becoming an approved packager. He just had an easier in, being a RH employee and all.
02:41 daMaestro good to hear. i keep up with the package stream, even now
02:42 daMaestro yeah, i was actually rather pleased RH put a fulltime on it. i had $dayjob to deal with and we outgrew... well changed hardware platforms... gluster
02:43 JoeJulian What're you using now?
02:43 daMaestro we hit half a petabyte and it was getting unwieldy on top of our hardware and the EL5 XFS drivers
02:43 JoeJulian Ah, yeah, those were old and slow.
02:43 daMaestro had brick corruption on a few volumes under a distribute filter and basically lost access to those paths forever (xattr corruption at the xfs layer)
02:44 JoeJulian argh
02:44 daMaestro we were able to recover (thanks glusterfs design), but it was something we needed to address
02:44 daMaestro no data loss, but the distribute path was basically b0rked
02:45 JoeJulian Right, that was pre gfid, too, so you would have multiple copies of all the files when that broke.
02:45 daMaestro yup, exactly
02:46 daMaestro before the new design we had mirrored bricks under a distribute filter on top of el5 xfs
02:46 daMaestro it really did well... until it didn't. but that is all distributed filesystems
02:47 daMaestro we now have approaching two petabytes on top of onefs
02:48 JoeJulian Ah yes, lots of money. :)
02:48 daMaestro yeah, once emc bought them.. the pricing model changed enough for me to take notice
02:48 JoeJulian $/TB is too much for our tastes.
02:48 daMaestro the fs itself is actually rather novel. and i love that it's a BSD core with some cool mods
02:49 JoeJulian I've seen the code. It's... interesting.
02:49 daMaestro it's also commodity hardware (at least back in the day, no idea what emc now dell is gonna do with it)
02:49 daMaestro if interesting works, i'm cool with it
02:49 daMaestro i just did a migration of half a PB, it took over a week to complete, but operator time was about 12 hours
02:50 JoeJulian Most of the troubles they had back then were nfs related. That was also pre-acquisition by a few years.
02:50 daMaestro JoeJulian, so a new use-case (vs my current bulk storage WORM) is virt disk storage
02:51 daMaestro JoeJulian, considering gluster for that. any experiences with it?
02:51 daMaestro JoeJulian, i know back in the day it was... wild west
02:51 daMaestro at least in my experience
02:53 JoeJulian I've done both gluster and ceph. Ceph wins for rapid snapshotting, but gluster is my preference for every other reason.
02:54 JoeJulian It is kind-of cool, though, to be able to spin up 1000 vms in under 10 seconds.
02:54 daMaestro what are you putting gluster under? (i've used ceph with openstack)
02:54 JoeJulian openstack
02:55 JoeJulian Apparently gluster's also very popular with oVirt.
02:58 daMaestro yeah... two flocks ago i caught wind that oVirt was still actively being worked on. glad to see that.
02:58 daMaestro some of the building blocks are really, really valuable
02:58 daMaestro missed the flock in poland this year due to $dayjob conflict
02:59 JoeJulian Looks like it's great for people that love vmware.
02:59 JoeJulian Seems every bit as cumbersome.
02:59 nbalacha joined #gluster
03:05 muneerse joined #gluster
03:06 daMaestro so i've used gluster for backing qemu machine images in lab environments and it worked well enough
03:07 daMaestro i wonder if it's going to be useful for "solving" the docker VOLUME problem
03:07 daMaestro i've not looked under the hood of openshift origin, but wonder if it's there
03:07 daMaestro i guess it really just matters if the nfs server translator has stabilized?
03:08 daMaestro now that fuse is... well... headed towards no-longer-viable
03:08 daMaestro $0.02
03:09 muneerse2 joined #gluster
03:24 kdhananjay joined #gluster
03:29 JoeJulian daMaestro: I've expressed my thoughts on docker here before. I'm not even trying to put any effort into figuring out storage for that.
03:30 JoeJulian well, my wife's just arrived home. I'm out.
03:34 kramdoss_ joined #gluster
03:34 gem joined #gluster
03:41 itisravi joined #gluster
03:50 atinm joined #gluster
04:01 ooben joined #gluster
04:01 ooben Does anyone know of a citation I could use for Gluster when referring to it in an academic paper?
04:01 aravindavk joined #gluster
04:04 hagarth ooben: http://www.osti.gov/scitech/biblio/1048672
04:04 glusterbot Title: GlusterFS One Storage Server to Rule Them All (Conference) | SciTech Connect (at www.osti.gov)
04:04 hagarth ooben: http://dl.acm.org/citation.cfm?id=2555790
04:04 glusterbot Title: Scale out with GlusterFS (at dl.acm.org)
04:05 ooben hagarth thank you!
04:06 hagarth ooben: yw, are you writing an academic paper?
04:07 prth joined #gluster
04:08 ooben hagarth yes, trying to target https://scinet.supercomputing.org/workshop . we briefly used glusterfs in a benchmark
04:08 glusterbot Title: scinet.supercomputing.org (at scinet.supercomputing.org)
04:10 karthik_ joined #gluster
04:10 hagarth ooben: cool, let us know if you get in there :)
04:11 ooben sure, thank you!
04:12 prth joined #gluster
04:14 mchangir joined #gluster
04:15 kotreshhr joined #gluster
04:15 rwheeler joined #gluster
04:33 ashiq joined #gluster
04:35 Philambdo joined #gluster
04:46 auzty joined #gluster
04:53 cholcombe joined #gluster
04:56 ramky joined #gluster
04:56 rafi joined #gluster
05:00 ndarshan joined #gluster
05:03 Saravanakmr joined #gluster
05:04 skoduri joined #gluster
05:05 kdhananjay joined #gluster
05:07 satya4ever joined #gluster
05:13 kshlm joined #gluster
05:13 aspandey joined #gluster
05:13 karthik_ joined #gluster
05:20 aravindavk joined #gluster
05:23 RameshN joined #gluster
05:24 k4n0 joined #gluster
05:24 [diablo] joined #gluster
05:28 Philambdo joined #gluster
05:37 kdhananjay joined #gluster
05:41 Lee1092 joined #gluster
05:41 ieth0 joined #gluster
05:41 jiffin joined #gluster
05:46 ppai joined #gluster
05:50 RameshN joined #gluster
05:50 mhulsman joined #gluster
05:53 mhulsman1 joined #gluster
05:59 ppai joined #gluster
06:02 ankitraj joined #gluster
06:02 hgowtham joined #gluster
06:14 karnan joined #gluster
06:15 kdhananjay joined #gluster
06:29 jtux joined #gluster
06:32 Gnomethrower joined #gluster
06:36 Gambit15 joined #gluster
06:40 jwd joined #gluster
06:44 ieth0 joined #gluster
06:56 jri joined #gluster
06:58 Klas !paste
06:59 jiffin @paste
06:59 glusterbot jiffin: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
07:00 Klas https://paste.fedoraproject.org/427849/47383634/
07:00 Klas I'm getting loads of issues like this, enough for several gigabytes of data on clients.
07:00 glusterbot Title: #427849 • Fedora Project Pastebin (at paste.fedoraproject.org)
07:00 Klas jiffin: thanks mate =)
07:00 Klas (I scrolled back to a complaint about pastebin to find fpaste instead)
07:00 d4n13L joined #gluster
07:01 Klas "gluster volume heal icinga-lab info split-brain" on servers says that nothing is in split-brain
07:01 jiffin Klas: :)
07:06 partner joined #gluster
07:07 petan joined #gluster
07:09 mlhess joined #gluster
07:15 rafi1 joined #gluster
07:29 fsimonce joined #gluster
07:33 aravindavk joined #gluster
07:36 jkroon joined #gluster
07:40 Klas I've used replace-brick and now I can see that a file is broken, but it doesn't show up under the split-brain option; how do I replace it?
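
A few commands worth trying in that situation; the volume name is the one from the paste above, the brick path is illustrative. Plain heal info lists files that still need healing even when the split-brain listing is empty, a full heal re-crawls the bricks after a replace-brick, and if a file really is split-brained the 3.7 CLI can pick the good copy explicitly.

    # everything still pending heal, not just split-brain entries
    gluster volume heal icinga-lab info
    # force a full crawl after the replace-brick
    gluster volume heal icinga-lab full
    # if a specific file is in split-brain, choose the source copy by brick
    gluster volume heal icinga-lab split-brain source-brick \
        server1:/bricks/icinga-lab/brick /path/within/volume/file
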
07:49 xMopxShell joined #gluster
07:51 rafi1 joined #gluster
07:57 legreffier joined #gluster
08:05 shortdudey123 joined #gluster
08:14 [diablo] joined #gluster
08:16 Slashman joined #gluster
08:17 shdeng joined #gluster
08:19 prth joined #gluster
08:21 aspandey joined #gluster
08:22 mhulsman joined #gluster
08:24 mhulsman1 joined #gluster
08:35 gem joined #gluster
08:35 atinm joined #gluster
08:40 robb_nl joined #gluster
08:45 devyani7 joined #gluster
08:52 derjohn_mob joined #gluster
08:52 mhulsman joined #gluster
08:57 mhulsman joined #gluster
09:14 RameshN joined #gluster
09:15 RameshN joined #gluster
09:16 Slashman hello, is it possible to use a glusterfs client 3.7.x to connect to a glusterfs 3.6.x daemon?
09:17 Wizek_ joined #gluster
09:20 xavih joined #gluster
09:20 malevolent joined #gluster
09:22 nohitall joined #gluster
09:23 poornima joined #gluster
09:23 post-factum Slashman: any reason to do that?
09:24 nohitall left #gluster
09:25 Slashman post-factum: the default version of glusterfs is 3.7.x on ubuntu 16.04, it's easier to use that, but the daemon running the volumes on other servers is on 3.6.x. I think it works, I cannot find any reference to something like that not working
09:27 legreffier joined #gluster
09:29 mhulsman joined #gluster
09:30 atinm joined #gluster
09:30 mhulsman joined #gluster
09:33 post-factum Slashman: i'd upgrade everything first
09:33 post-factum Slashman: i remember 3.6+3.7 issues, but those were regarding server2server interconnection
09:33 post-factum Slashman: not sure if clients were affected
09:34 Slashman post-factum: I cannot update the servers yet, it's planned but cannot be done just now
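
A quick way to see what is actually being mixed before relying on it (server, volume and mount point names are assumptions): each server records the cluster op-version in glusterd.info, and a throwaway mount from the 3.7 client shows whether it negotiates with the 3.6 bricks.

    # on each 3.6 server: the cluster op-version (30600 corresponds to 3.6.x)
    grep operating-version /var/lib/glusterd/glusterd.info
    # on the Ubuntu 16.04 client: confirm the installed client version
    glusterfs --version
    # test mount against the 3.6 servers before putting it into production
    mount -t glusterfs server1:/myvol /mnt/test
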
09:38 ju5t joined #gluster
09:42 nbalacha joined #gluster
09:42 mchangir joined #gluster
09:43 rastar joined #gluster
09:44 mhulsman1 joined #gluster
09:45 shdeng joined #gluster
09:46 wiza joined #gluster
09:51 karnan joined #gluster
09:53 mhulsman joined #gluster
09:53 mhulsman2 joined #gluster
09:55 mhulsman1 joined #gluster
09:56 jwd joined #gluster
09:58 derjohn_mob joined #gluster
10:08 Philambdo joined #gluster
10:17 nishanth joined #gluster
10:24 nbalacha joined #gluster
10:25 harish joined #gluster
10:27 plarsen joined #gluster
10:30 slunatecqo joined #gluster
10:30 mhulsman joined #gluster
10:32 B21956 joined #gluster
10:32 mhulsman1 joined #gluster
10:34 HitexLT joined #gluster
10:36 slunatecqo If I want to run gluster in a docker container, I have to publish the ports. I successfully managed to do that. But when I am creating a gluster volume, I have to specify an IP address. The host's IP address is not in the gluster cluster and the container IP is not visible from the internet. Any ideas how I could make gluster use the host's IP?
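
One common workaround, sketched here with an assumed image name and bind mounts (not something confirmed in this log): skip port publishing entirely and run the gluster container in the host's network namespace, so glusterd binds to and advertises the host IP that the rest of the cluster can reach.

    docker run -d --name glusterd --net=host --privileged \
        -v /etc/glusterfs:/etc/glusterfs \
        -v /var/lib/glusterd:/var/lib/glusterd \
        -v /var/log/glusterfs:/var/log/glusterfs \
        -v /bricks:/bricks \
        gluster/gluster-centos
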
10:36 HitexLT Hi guys. I have one question. Is it possible to somehow force full access for all users to mounted glusterfs storage? We have one gluster storage and various servers with different apps and users should be able to read and write the same files.
10:41 B21956 HitexLT: I'll have to doublecheck but I think if you set the brick directory permissions (sticky bit) so that all subdirectories have RW for everyone, you should be GTG
10:50 rastar joined #gluster
10:51 poornima joined #gluster
10:52 HitexLT B21956: Wouldn't tampering with the brick's directory potentially break replication or cause other unwanted problems?
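
A gentler alternative to touching the bricks directly, sketched with an assumed volume name, group and mount point: set ownership and permissions through a client mount, which replicates like any other metadata change, or let gluster pin the volume root's owner with the storage.owner-uid/gid options.

    # from any client mount: open up the shared directories
    chmod 0777 /mnt/myvol/shared
    # or grant a common group rw via POSIX ACLs (mount the client with -o acl)
    setfacl -R -m g:appusers:rwX /mnt/myvol/shared
    find /mnt/myvol/shared -type d -exec setfacl -m d:g:appusers:rwX {} +
    # optionally fix the owner of the volume root on the bricks
    gluster volume set myvol storage.owner-uid 1000
    gluster volume set myvol storage.owner-gid 1000
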
10:57 karnan joined #gluster
10:59 plarsen joined #gluster
10:59 mhulsman joined #gluster
11:00 jri_ joined #gluster
11:03 mhulsman1 joined #gluster
11:07 ashiq joined #gluster
11:08 jiffin1 joined #gluster
11:27 robb_nl joined #gluster
11:29 mhulsman joined #gluster
11:30 plarsen joined #gluster
11:30 plarsen joined #gluster
11:30 ieth0 joined #gluster
11:31 mhulsman1 joined #gluster
11:32 bit4man joined #gluster
11:42 ankitraj joined #gluster
11:44 plarsen joined #gluster
11:44 Saravanakmr joined #gluster
11:45 arcolife joined #gluster
11:48 prth joined #gluster
11:48 ankitraj joined #gluster
11:48 gem joined #gluster
11:52 spalai1 joined #gluster
12:00 jdarcy joined #gluster
12:01 ankitraj #info #startmeeting Gluster Community Meeting
12:06 xavih joined #gluster
12:06 rastar joined #gluster
12:07 malevolent joined #gluster
12:11 kotreshhr1 joined #gluster
12:13 nishanth joined #gluster
12:22 kramdoss_ joined #gluster
12:34 plarsen joined #gluster
12:35 prth joined #gluster
12:40 mchangir joined #gluster
12:43 kotreshhr joined #gluster
12:43 shyam joined #gluster
12:43 ashiq joined #gluster
12:44 mhulsman joined #gluster
12:46 jiffin joined #gluster
12:53 Klas Slashman: I'd strongly recommend rolling your own, the ubuntu versions are seriously atrocious
12:54 plarsen joined #gluster
12:54 Slashman Klas: I'll use nfs until I can migrate everything to 3.8, then I'll use the official ppa from gluster
13:01 mhulsman1 joined #gluster
13:07 nbalacha joined #gluster
13:08 mchangir joined #gluster
13:09 ira joined #gluster
13:10 atinm joined #gluster
13:13 kim__ joined #gluster
13:15 jiffin joined #gluster
13:16 kimmeh got a question about replication. is self heal automatic or does it have to be triggered by a filesystem op?
13:16 spalai1 left #gluster
13:21 shyam joined #gluster
13:31 skylar joined #gluster
13:31 kshlm joined #gluster
13:33 mreamy joined #gluster
13:35 Klas Slashman: that is probably wise, we build our own packages since it's required by our organisation, kind of
13:35 Klas (but we will probably try to keep building new versions as they roll out)
13:37 jwd joined #gluster
13:52 nbalacha joined #gluster
13:55 baudster joined #gluster
13:57 RameshN joined #gluster
14:02 derjohn_mob joined #gluster
14:06 kotreshhr left #gluster
14:13 plarsen joined #gluster
14:14 shyam joined #gluster
14:14 unclemarc joined #gluster
14:21 derjohn_mob joined #gluster
14:24 jkroon joined #gluster
14:26 squizzi joined #gluster
14:31 adminxor joined #gluster
14:42 bowhunter joined #gluster
14:44 kramdoss_ joined #gluster
14:45 derjohn_mob joined #gluster
14:49 [diablo] joined #gluster
14:50 hagarth joined #gluster
14:51 Muthu_ joined #gluster
14:59 ajneil joined #gluster
15:00 [diablo] joined #gluster
15:03 derjohn_mob joined #gluster
15:06 ajneil does anyone have any tips for keeping incremental backups (via e.g. amanda) of gluster volumes?
15:07 ajneil I am struggling with timeouts due to the metadata-intensive operations
15:08 ajneil I am contemplating the following which may be utter crack
15:09 ajneil create a snapshot of the volume, then use the brick directly instead of gluster as a source of the backup
15:10 ajneil I'm assuming with a snapshot in place all the original bricks will remain consistent until the snapshot is removed.
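
That is roughly what gluster snapshots are for; a sketch under assumed names (myvol, the snapshot name, server and mount point), relying on the documented ability to mount an activated snapshot as a read-only gluster volume and to point the backup at that instead of the live volume. Note that snapshots require the bricks to sit on thinly provisioned LVM.

    # take and activate a consistent point-in-time snapshot
    gluster snapshot create amanda_snap myvol
    gluster snapshot list myvol            # gluster appends a timestamp to the name
    gluster snapshot activate <snapname-from-list>
    # mount the snapshot and run the backup against it
    mount -t glusterfs server1:/snaps/<snapname-from-list>/myvol /mnt/backup-src
    # ... run amanda / tar / rsync against /mnt/backup-src here ...
    umount /mnt/backup-src
    gluster snapshot deactivate <snapname-from-list>
    gluster snapshot delete <snapname-from-list>
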
15:11 MadPsy there are several parameters to help with that.. such as cluster.lookup-optimize etc. Also, I find NFS quicker than the FUSE mount for lots of small operations
15:12 ajneil Which NFS though? I have been using the built-in NFS and it has been crashing under load, I am going to migrate to ganesha but that's an all or nothing switch.
15:12 MadPsy cluster.readdir-optimize, performance.quick-read etc. are all worth looking into
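
For reference, those are all ordinary volume options and are applied the same way (volume name assumed); as noted a few lines further down, cluster.lookup-optimize mainly helps distributed volumes.

    gluster volume set myvol cluster.readdir-optimize on
    gluster volume set myvol performance.quick-read on
    gluster volume set myvol cluster.lookup-optimize on
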
15:14 shyam joined #gluster
15:15 ajneil I have cluster.readdir-optimize, and performance.quick-read on already, any caveats with cluster.lookup-optimize?
15:19 ajneil hmm it seems that cluster.lookup-optimize is only relevant for distribute volumes, mine are straight replica 3
15:22 MadPsy not sure tbh
15:24 wushudoin joined #gluster
15:31 cholcombe joined #gluster
15:36 jkroon joined #gluster
15:39 plarsen joined #gluster
15:40 slunatecqo left #gluster
15:48 bluenemo joined #gluster
15:52 cholcombe joined #gluster
15:54 derjohn_mob joined #gluster
15:58 pedrogibson joined #gluster
16:04 pedrogibson JoeJulian - thanks for the reply.  My apprehension about using hooks is twofold:  1) the gluster install that currently has the spurious logging issue and needs a feature/option set does not have any hook scripts.   2) We have another gluster install of equal size that does not have the same problem, and review of its .vol files shows these features/attributes set; however there are no hook scripts in that system either.
16:05 pedrogibson JoeJulian:  the problem we have is spurious logging.   And bugzilla.redhat shows the two .vol features that need to be set to eliminate this flooding of logs.
16:06 JoeJulian and there's nothing different between them? save version, volume options, etc?
16:06 JoeJulian And which bug is this?
16:06 JoeJulian s/save/same/
16:06 glusterbot What JoeJulian meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
16:06 * JoeJulian whacks glusterbot
16:07 glusterbot JoeJulian: Error: ProcessTimeoutError: Process #288 (for String.re) aborted due to timeout.
16:07 glusterbot JoeJulian: Error: ProcessTimeoutError: Process #287 (for String.re) aborted due to timeout.
16:07 glusterbot JoeJulian: Error: ProcessTimeoutError: Process #289 (for String.re) aborted due to timeout.
16:07 pedrogibson the two installs: #1 (with issue) was installed from 3.4 and was upgraded.  #2 was installed with a later version (i think 3.7.x) and both have since been upgraded to 3.7.14
16:08 pedrogibson the bug bugzilla link discussing is:  bug ID 1313567
16:08 JoeJulian bug 1313567
16:09 JoeJulian hmm
16:09 pedrogibson yes..
16:09 JoeJulian glusterbot is misbehaving
16:09 JoeJulian My whole machine is misbehaving...
16:10 pedrogibson Nonetheless, you have opened my eyes to eventing and hooks so that we can provide better management/monitoring for our deployments.
16:11 kpease joined #gluster
16:13 Gnomethrower joined #gluster
16:14 JoeJulian I'm guessing this must be a failure of the upgrade process that was used.
16:14 Gnomethrower joined #gluster
16:15 JoeJulian On #1, rename all the vol files and run "glusterd --xlator-option *.upgrade=on -N"
16:15 pedrogibson that could be..  despite the fact that all is OK.  the 2nd system (without issue) was installed from a later version that seemed to have already addressed the issue, while the first system was originally installed with a version that did not support this feature/attribute
16:15 JoeJulian Mmmm, I also wonder if this is op-version related. Let me look at the source...
16:17 pedrogibson thanks for the info. i will do the above upgrade command after you look at the source, and let you know if you still feel the renaming is the only solution.  If so it will take me a week before we can do it given current production status.
16:19 pedrogibson JoeJulian - gotta run now.. thanks for your input/guidance.  Saludos
16:20 JoeJulian pedrogibson: Nope, not op-version related, so just run that upgrade command after renaming the vol files and I expect that'll fix it for you.
16:21 pedrogibson ok.. thanks for the quick turnaround.  will do the rename as you suggest and let you know the outcome.  Allow me a week to get back to you.  thanks.
16:21 pedrogibson left #gluster
16:21 JoeJulian I'll be here
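
Spelled out, the volfile regeneration JoeJulian suggests looks roughly like this; the volume name and systemd commands are assumptions, and /var/lib/glusterd should be backed up first. Moving the generated .vol files aside forces glusterd to rebuild them with the current option defaults.

    systemctl stop glusterd
    cp -a /var/lib/glusterd /var/lib/glusterd.bak
    mkdir /var/lib/glusterd/vols/myvol/old-volfiles
    mv /var/lib/glusterd/vols/myvol/*.vol /var/lib/glusterd/vols/myvol/old-volfiles/
    # with *.upgrade=on glusterd regenerates the volfiles and exits; -N = run in foreground
    glusterd --xlator-option '*.upgrade=on' -N
    systemctl start glusterd
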
16:23 gem joined #gluster
16:23 Gambit15 joined #gluster
16:32 derjohn_mob joined #gluster
16:36 hagarth joined #gluster
16:36 shyam joined #gluster
16:58 d0nn1e joined #gluster
17:05 mhulsman joined #gluster
17:07 shyam joined #gluster
17:08 glusterbot joined #gluster
17:08 skoduri joined #gluster
17:09 BitByteNybble110 joined #gluster
17:12 jri joined #gluster
17:13 kimmeh joined #gluster
17:13 jiffin joined #gluster
17:21 mhulsman joined #gluster
17:27 mhulsman joined #gluster
17:27 rafi joined #gluster
17:28 ieth0 joined #gluster
17:30 iopsnax joined #gluster
17:38 prth joined #gluster
17:45 d0nn1e joined #gluster
17:51 jwd joined #gluster
17:58 shyam joined #gluster
18:00 d0nn1e joined #gluster
18:05 prth joined #gluster
18:27 Wizek_ joined #gluster
18:33 robb_nl joined #gluster
18:36 kpease joined #gluster
18:39 rafi joined #gluster
18:42 kpease joined #gluster
18:42 jiffin joined #gluster
18:43 cliluw How far away is New Style Replication? I know software estimates are bunk but it would help.
18:48 roost joined #gluster
18:51 social joined #gluster
19:13 adminxor joined #gluster
19:14 mhulsman joined #gluster
19:35 MikeLupe joined #gluster
19:37 MikeLupe hello - I'm afraid I have to ask again about extending a r3 a 1 volume with a local disk on each node. Does someone have time to give me some hints?
19:42 MikeLupe I extended the main LVM volume group with a newly created physical volume (3rd disk in the node). I then extended the logical volume with the rest of the newly available space. That worked, so my LV has the entire space including the 3rd disk. But I'm not able to resize2fs, I get "Bad magic number in super-block while trying to open /dev/gluster_vg1/data . Couldn't find valid filesystem superblock"
19:56 Klas joined #gluster
19:59 JoeJulian I would venture to guess, therefore, that the filesystem is not ext2/3/4.
20:03 JoeJulian cliluw: according to https://bugzilla.redhat.com/show_bug.cgi?id=1158654 it should be in 3.8. I don't believe it.
20:04 cliluw JoeJulian: I don't believe it either. If it's in 3.8, that means it's already out.
20:05 JoeJulian It says it's intended to be a tracking bug, but most of the bugs linked to it are still open.
20:05 JoeJulian I'm guessing that it's all ndevos' fault.
20:20 glusterbot joined #gluster
20:20 bowhunter joined #gluster
20:22 hagarth joined #gluster
20:23 shyam joined #gluster
20:25 MikeLupe JoeJulian: it's xfs
20:26 JoeJulian MikeLupe: bingo, so how do you grow an xfs filesystem?
20:26 MikeLupe xfs_growfs
20:26 MikeLupe ;)
20:26 JoeJulian :)
20:27 MikeLupe so easy - thx ;)
20:27 JoeJulian You're welcome.
20:28 MikeLupe daaamn - I finally got it (after 4 months?) ;)
20:28 hagarth joined #gluster
20:28 JoeJulian haha
20:28 MikeLupe do I have to, besides resync gluster vol, do something else?
20:29 MikeLupe Well, I must admit, I always tried for 1-2 hours and then I left it for 3-4 weeks...
20:29 JoeJulian Nothing. Once you've resized your brick, you're done. None of the gluster metadata needs to be touched.
20:29 MikeLupe really??
20:29 d0nn1e joined #gluster
20:29 JoeJulian really really
20:29 MikeLupe oh damn - that was a thing of not even 10 minutes...
20:31 MikeLupe I simply was "afraid" I would mess up the gluster volume if I simply extended everything - but obviously it _was_ that simple.
20:32 MikeLupe JoeJulian: One more thing - I don't even have to touch the third a1 node in that case, if I understand right.
20:32 MikeLupe I simply do the same with the 2nd node and that's it?
20:33 JoeJulian Correct
20:33 MikeLupe omfg
20:33 MikeLupe sry
20:33 JoeJulian (assuming "a" means arbiter)
20:33 MikeLupe it does
20:33 JoeJulian (and "node" means "server")
20:33 MikeLupe as well
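
Put together, the brick grow that worked here is only a handful of commands per server; the new disk's device name and the brick mount point are assumptions, the VG and LV names are the ones from the messages above.

    pvcreate /dev/sdc                          # the newly added 3rd disk (assumed name)
    vgextend gluster_vg1 /dev/sdc              # grow the existing volume group
    lvextend -l +100%FREE /dev/gluster_vg1/data
    # the brick filesystem is XFS, so grow it via its mount point (resize2fs is ext-only)
    xfs_growfs /gluster_bricks/data            # assumed brick mount point
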
20:39 arcolife joined #gluster
20:47 RobertTuples joined #gluster
20:47 derjohn_mob joined #gluster
20:48 hackman joined #gluster
20:58 johnmilton joined #gluster
20:59 roost joined #gluster
21:02 MikeLupe JoeJulian - sorry, I must again express myself ... HURRAY! The oVirt data storage domain went up flawlessly :) ahh....
21:03 JoeJulian Congratulations
21:03 MikeLupe yeah - one of those little "ups" in life
21:04 MikeLupe next time I'll bother the guys in the lvm channel
21:04 prth joined #gluster
21:04 MikeLupe well, there won't be a next time in this case ;)
21:04 JoeJulian They probably would have dumped you into #kernel
21:04 MikeLupe lol
21:04 MikeLupe ahh..thanks for that one
21:05 JoeJulian I like an easy win every once in a while.
21:05 JoeJulian And I don't mind helping. That's why I hang out in here.
21:11 MikeLupe For you that one was an easy win - for myself a huge win, as I struggled with possible dependencies on gluster... I was about to completely chicken out and add 3 additional servers.... and then I got furious. And your help... so I'll have a nice sleep
21:15 kimmeh joined #gluster
21:17 hagarth joined #gluster
21:23 prth joined #gluster
21:54 hagarth1 joined #gluster
22:30 prth joined #gluster
22:57 johnmilton joined #gluster
23:25 johnmilton joined #gluster
23:47 MikeLupe nn
23:49 jeremyh joined #gluster
23:55 masber joined #gluster
23:56 johnmilton joined #gluster
