I'd love to hear about what interesting problems (technically or otherwise) you're working on -- and if you're willing to share more, I'm curious how you ended up working on them.
Thank you :)
Specifically, I spend a lot of time thinking about and writing embedded software. The aircraft is fully autonomous and needs to be able to fly home safely even after suffering major component failures. I split my time between improving core architectural components, implementing new features, designing abstractions, and adding testing and tooling to make our ever-growing team work more efficiently.
I did FIRST robotics in high school where I mainly focused on controller firmware. I studied computer science in college while building the embedded electronics for solar powered race cars, and also worked part time on research projects at Toyota. After graduating with a Master's degree, I stumbled into a job at SpaceX where I worked a lot on the software for cargo Dragon, then built out a platform for microcontroller firmware development. I decided to leave SpaceX while happy, and spent a couple years working on the self driving car prototype (the one that looked like a koala) at Google. Coming up on my third year, I was itching for something less comfortable and decided to join a scrappy small startup with a heart of gold. Now it's in the hundreds of employees and getting crazier and crazier.
About a year ago, I started spending more time researching climate change. I learned how important energy storage will be to enable renewable energy to displace fossil fuels. The more I read, the more fascinated I became with the idea of building underground pumped hydro energy storage. I found a research paper from the U.S. DOE written in 1984 showing that the idea was perfectly feasible and affordable, but it seems that nearly everyone has forgotten about it since. (They didn't build it at the time because the demand wasn't there yet; now energy storage demand is growing exponentially.)
A year later, I'm applying for grant funding to get it built. I know that nearly everyone will tell me I can't do it for this or that reason, because people don't like change and they're scared of big things even if the research shows it makes perfect sense. But I'm doing it anyway because no one else is getting it done. The idea is too compelling and too important to ignore. So here goes nothing!
The challenge is obviously scaling, since every municipality is different. For now it's going to cover my region and we'll see from there.
DNS is currently centralized and controlled by a few organizations at the top of the hierarchy (namely ICANN) and easily censored by governments. Trust in HTTPS is delegated by CAs, but security follows a one-of-many model, where only one CA out of thousands needs to be compromised for your traffic to be compromised.
We're building on top of a new protocol (https://handshake.org, launching in 7 days!!) to create an alternate root zone that's distributed. Developers can register their own TLDs and truly own them by controlling their private keys. In addition, they can pin TLSA certs to their TLD so that CAs aren't needed anymore.
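As a rough illustration of what TLSA pinning buys you (this is generic DANE-style pinning, not Handshake's specific implementation, and it assumes the third-party `cryptography` package): publish a hash of the certificate's public key in DNS, and clients that trust the zone can verify the server without any CA.

```python
# Rough illustration of DANE-style TLSA pinning (selector=SPKI, matching=SHA-256).
# Not Handshake-specific; assumes the third-party 'cryptography' package.
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def tlsa_3_1_1(cert_pem: bytes) -> str:
    """Return the hex digest to publish as a 'TLSA 3 1 1' record."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    spki = cert.public_key().public_bytes(
        Encoding.DER, PublicFormat.SubjectPublicKeyInfo
    )
    return hashlib.sha256(spki).hexdigest()

# The record would then live in the zone as something like:
#   _443._tcp.yourname. IN TLSA 3 1 1 <hex digest>
# A resolver that trusts the zone can verify the server's cert without any CA.
```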
I wrote a more in-depth blog post here: https://www.namebase.io/blog/meet-handshake-decentralizing-d...
While in college (CS & Math), I got heavily interested in growing food in the most efficient and healthiest way possible. I was a dreamer when I started so I thought more of how to grow 'earthly' produce on Mars, but then I realized that my own planet Earth is so massively underserved.
It's basically like this: I mastered growing leafy greens in an indoor closed environment, then tried to cover all the major physical and biological markers, and now I'm trying to find the optimal levels of the 5-6 variables (currently) that I can fully control and that may produce the best phenotype: CO2, O2, light, nitrate, P, K. These parameters have their own sub-definitions.
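To make that concrete, here's a toy sketch of what searching that setpoint space could look like - the variable grid and the response function below are entirely hypothetical stand-ins for real grow trials:

```python
# Toy sketch: coarse grid search over grow-room setpoints.
# The response model below is hypothetical; in practice each point would be
# a real grow trial measured against phenotype markers.
from itertools import product

grid = {
    "co2_ppm":     [400, 800, 1200],
    "light_ppfd":  [200, 400, 600],
    "nitrate_ppm": [100, 150, 200],
}

def measured_yield(setpoint):
    # Placeholder for a real measurement (or a fitted surrogate model).
    co2, light, nitrate = setpoint["co2_ppm"], setpoint["light_ppfd"], setpoint["nitrate_ppm"]
    return -(co2 - 900) ** 2 / 1e5 - (light - 450) ** 2 / 1e4 - (nitrate - 160) ** 2 / 1e3

best = max(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=measured_yield,
)
print(best)
```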
So far I have had great results. I am trying to raise investment so I can finally make it a reality. Check the numbers here: hexafarms.com (no fluff)
johndoe.shopify@daily.paced.email
johndoe.stripe@weekly.paced.email
johndoe.github@monthly.paced.email
At the end of each period, a single email is sent to the real email address containing all of the messages the alias received over that timeframe. I'd love to hear how you'd use it.
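A minimal sketch of the digest idea (the structures here are hypothetical, not paced.email's actual internals): collect everything each alias received during the period, then send one combined email at the period boundary.

```python
# Hypothetical sketch of period-based digesting; not paced.email's actual code.
from collections import defaultdict

def build_digests(messages):
    """messages: iterable of (alias, subject, body) received during one period.
    Returns one digest body per alias, ready to forward to the real address."""
    by_alias = defaultdict(list)
    for alias, subject, body in messages:
        by_alias[alias].append((subject, body))

    digests = {}
    for alias, items in by_alias.items():
        parts = [f"{i}. {subject}\n{body}" for i, (subject, body) in enumerate(items, 1)]
        digests[alias] = f"{len(items)} messages for {alias} this period:\n\n" + "\n\n".join(parts)
    return digests
```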
Across all platforms (not just Reddit), people including myself like to save/bookmark interesting content in the hopes of getting some use out of it later. The problem arises when you start accumulating too much content and forget to ever check that stuff out.
I'm working on a solution to help resurface Redditors' saved things using personalized newsletters. I'm calling it Unearth and users get to choose how frequently they want to receive their newsletter (daily, weekly, or monthly). The emails contain five of their saved comments or things and link directly to Reddit so that when viewing it, they can then decide whether or not to unsave it.
Basic functionality is all there, just needs some more styling and the landing page could be spruced up.
It's funny how we're all working from different definitions of the word "problem" - I'm certainly not changing the world with medical supplies for developing countries, renewable energy, payment systems and so on.
But it's something I'm really passionate about, and I'd be over the moon if I came anywhere close to the picture I have in my mind.
Back when I was studying German and Chinese, I would spend hours and hours on rote practice with little to show for it. My brain almost felt like it was on autopilot - the eyes would read the words and the hands would write the sentences, but the neurons weren't really firing. It didn't feel like I was properly building the synaptic bridges necessary to actually use those words in conversation.
On the flipside, after just 20 minutes speaking with a tutor, my proficiency would improve by leaps and bounds. Being forced to map actual, real-world thoughts/concepts to the words/expressions I had learned - that's what made everything click. It felt like the difference between just reading a chapter in a maths textbook and actually doing the exercises.
So after keeping track of progress in NLP and speech recognition/synthesis in recent years, it seemed like a logical time to start. Progress is slow/incremental, but it is there.
2.) Porting my Python code for nonlinear gradient-driven optimization of parametric surfaces to C++. Includes a constraint (propagation) solver based on miniKanren extended with interval arithmetic for continuous data (interval branch and contract). This piece is a pre-processor, narrowing the design space to only feasible hyper-boxes before feeding design parameter sets (points in design space) to the (real valued) solver. Also it does automatic differentiation of control points (i.e. B-spline control points) so I can write an energy functional for a smooth surface, with Lagrange multipliers for constraints (B-spline properties). Then I get the gradient and Hessian without extra programming. This makes for plug-and-play shape control. I am looking to extend this to subdivision surfaces and/or to work it towards mesh deformation with discrete differential geometry, so I've been baking with those things in separate mini-projects. (A toy sketch of the interval-contraction idea appears after this list.)
3.) Starting the Coursera discrete optimization course. This should help with, e.g. knapsack problems on Leetcode, some structural optimization things at work, and also it seems the job market for optimization is focused on discrete/integer/combinatorial stuff presently so this may help in ways I do not foresee.
4.) C++ expression template to CUDA for physical simulation: I am periodically whittling away at this.
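Picking up the interval-contraction idea from item 2 above: here's a toy forward-backward (HC4-style) contraction of a single constraint x + y = c over interval boxes - a simplified illustration of the pre-processing step, not the actual solver.

```python
# Toy HC4-style contraction for the constraint x + y == c over interval boxes.
# Illustrative only; the real pre-processor handles general constraint graphs.

def contract_sum(x, y, c):
    """x, y are (lo, hi) intervals; contract them w.r.t. x + y == c."""
    # Forward: the sum's interval must contain the constant c.
    lo, hi = x[0] + y[0], x[1] + y[1]
    if not (lo <= c <= hi):
        return None  # infeasible box: prune it
    # Backward: x must lie in c - y, and y in c - x.
    x = (max(x[0], c - y[1]), min(x[1], c - y[0]))
    y = (max(y[0], c - x[1]), min(y[1], c - x[0]))
    return x, y

# Example: x in [0, 10], y in [0, 2], constraint x + y == 10
print(contract_sum((0, 10), (0, 2), 10))  # -> ((8, 10), (0, 2))
```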
We've solved scaling and reliability (we handle 20 billion API requests a month), and we're now focusing almost all our efforts on our data quality, and new data products (like VPN detection).
We're bootstrapped, profitable, and we've got some big customers (like Apple and T-Mobile), and despite being around for almost 7 years we've still barely scratched the surface on the opportunity ahead of us.
If you think you could help we're hiring - shoot me a mail - ben@ipinfo.io
On the side, I'm an advisor to an impact investment foundation that is expanding their operations to East Africa. They're setting up an investment fund and accelerator programs to help companies tackle development challenges.
I'm also involved in a startup that is working to develop a new fintech app to create more data and access to credit for small-scale businesses in East Africa. It's a basic PWA app, not released yet, which has some real potential of scaling up and addressing some pretty substantial development challenges. (If anyone is really good with writing a bare-bones PWA based on Gatsby optimised for speed and low-bandwidth environments, please give me a shout).
I've had a weird career. Started out as a programmer in the late 90's, did my own startup in the mid 00's which was a half-baked success, moved to Africa for a few years and worked for the UN, moved back home and had kids, moved back to Africa and worked as a diplomat covering lots of conflicts in the Great Lakes region, moved back home again, worked for the impact foundation for a year and then rejoined diplomacy to do cyber work.
My rationale for starting this project was that I like specific features or facilities of many individual languages, but I dislike those languages for a host of other reasons. Furthermore, I dislike those languages enough that I don't want to use them to build the projects I want to build.
I'm still at a relatively early point in the project, but it has been challenging so far. I'm implementing the compiler in Crystal, and I needed a PEG parser combinator library or a parser generator that targeted Crystal, but there wasn't a PEG parser in Crystal that supported left recursive grammar rules in a satisfactory way, so that was sub-project number 1. It took two years, I'm ashamed to say, but now I have a functioning PEG parser (with seemingly good support for left recursive grammar rules) in Crystal that I can use to implement the grammar for my language.
There is still a ton more to be done - see http://www.semanticdesigns.com/Products/DMS/LifeAfterParsing... for a list of remaining ToDos - but I'm optimistic I can make it work.
- Relevant to you and your interests...
- ... but diverse enough to feed your intellectual curiosity
- Delivered in a timely fashion: apart from once a year big events, most things can wait for a few days, no need to require you to read the news every day
- Include some analysis to allow you to see the big picture
When I started a few years ago, I thought naively that a little machine learning should do the trick. But the problem is actually quite complex. In any case, the sector is ripe for disruption.
Started because of frequent multitasking-heavy work with limited resources.
Open Beta (macOS) as soon as I finish license verification and delta updates.
- Getting a representation of a city that cleanly divides paved areas into distinct roads and intersections, and understands the weird multi-part intersections that Seattle has plenty of. [This](https://docs.google.com/presentation/d/1cF7qFtjAzkXL_r62CjxB...) and [this](https://github.com/dabreegster/abstreet/blob/master/docs/art...) have some details about how I'm attempting this so far.
- Inferring reasonable data when it doesn't exist. How are traffic signals currently timed? No public dataset exists, so I have heuristics that generate a plan. If I can make the [UI](https://raw.githubusercontent.com/dabreegster/abstreet/maste...) for editing signals fluid enough, would it be worth trying to collect this data through crowd-sourcing? (A toy version of such a timing heuristic appears after this list.)
- Figuring out what to even measure to demonstrate some change "helps a bus run faster." Should people try to minimize the 99%ile time for all buses of a certain route to make one complete loop over the whole day? Or to reduce the worst-case trip time of any agent using that bus by some amount? Or to minimize the average delay only during peak times between any pair of adjacent stops?
- Less technical: How to evangelize this project, get the city of Seattle and advocacy groups here using it, and find contributors?
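And here's the toy signal-timing heuristic promised above: split a fixed cycle among phases in proportion to the demand on each approach. It only illustrates the general idea; the names and numbers are made up, and it is not A/B Street's actual heuristic.

```python
# Toy signal-timing heuristic: allocate a fixed cycle length across phases in
# proportion to approach demand, subject to a minimum green time.
# Illustrative only; not the actual A/B Street heuristic.

def timing_plan(demand, cycle_s=90, min_green_s=10):
    """demand: {phase: vehicles_per_hour} -> green seconds per phase."""
    total = sum(demand.values()) or 1
    # Proportional split with a floor; if floors push past the cycle length,
    # the cycle simply stretches (real heuristics would rebalance instead).
    return {phase: round(max(min_green_s, cycle_s * d / total), 1)
            for phase, d in demand.items()}

print(timing_plan({"NS through": 600, "EW through": 300, "NS left": 80}))
```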
All because I can't get a 3DS game to load modified videos.
- initially, all manual
- secondly, timers - I know when some airlines do deals, so I go look
- thirdly, I found other sites indexing unusually cheap flights, but they're not always the same price on my site
- fourth, built a script to search my own site for a route, but the number of combinations rockets with the increase in date ranges. If you're taking different stopovers etc, it becomes ludicrous.
- it's growing at least, but finding ways to make it less hands on and less mind-numbing is a never ending quest. Although I still enjoy it :)
I've found a strategy I believe will work -- my Leadership and the Environment podcast.
Here's the podcast: http://joshuaspodek.com/podcast
Here's an episode clarifying my strategy: https://shows.acast.com/leadership-and-the-environment/episo...
Here's my corporate strategy: https://shows.acast.com/leadership-and-the-environment/episo...
Working in my spare time on a command line terminal UI application that searches over source code and ranks the results.
It came about from watching a work colleague constantly opening VSCode when trying to find things in a codebase. I mentioned he should use ripgrep/silver searcher, which he tried, but he said he preferred to get more context and wanted ranked results. The context was possible using -A and -B, but he didn't want that.
I had always wanted to make a terminal application and it seemed like an interesting problem to solve. I had also always wanted to implement BM25/TFIDF ranking algorithms myself and I was curious to see how well this could be done without pre-flighting and building an index.
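For reference, this is roughly the BM25 scoring I mean, computed on the fly over a handful of candidate files instead of a prebuilt index (tokenization and constants simplified):

```python
# Minimal on-the-fly BM25 over a handful of candidate files (no prebuilt index).
# Simplified tokenization/constants; illustrative of the ranking step only.
import math, re

def bm25_rank(query, docs, k1=1.2, b=0.75):
    """query: str, docs: {path: text}. Returns [(score, path)] best first."""
    tokenize = lambda s: re.findall(r"\w+", s.lower())
    terms = tokenize(query)
    doc_tokens = {p: tokenize(t) for p, t in docs.items()}
    n = len(docs)
    avgdl = sum(len(t) for t in doc_tokens.values()) / max(n, 1)

    scores = []
    for path, tokens in doc_tokens.items():
        score = 0.0
        for term in terms:
            df = sum(1 for t in doc_tokens.values() if term in t)
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            tf = tokens.count(term)
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(tokens) / avgdl))
        scores.append((score, path))
    return sorted(scores, reverse=True)
```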
Still a work in progress https://github.com/boyter/cs but coming along. It's usable now (with bugs) and is being used by my workmate.
Also, working on how to integrate a small team of hackers into a big team of production oriented engineers. Making the first of something is such a different skill set to making thousands more.
I got here by getting headhunted for a neat-sounding job after a project elsewhere ended, and then assuming more and more duties until my title had to change to match my responsibilities.
Also the usual stuff. Hitting the gym (30 min a day, 5x a week), clearing out junk I don't need anymore, multivitamin, etc. 2020 is going to be the year of wellness for me.
EDIT: Forgot to mention this, DELETE YOUR SOCIAL MEDIA APPS. All of them. Use the mobile websites if you need to read them. Not having the icons on my home screen or app drawer made all the difference and really helped fix my cyberaddiction.
I recently learned that homelessness is not just about the people you see on the street every day, but that homelessness is in fact a funnel that people fall deeper into as their situation becomes increasingly desperate. At the bottom of the funnel is the aforementioned group known as the "chronically homeless". The top of the funnel, however, looks a lot different: it consists of people who might be couch surfing with friends, sleeping in cars, or moving between motels. This group is known as the "hidden homeless". We likely encounter this group every day - at work, in the coffee shop, at the gym - but they look just like you and me, so we fail to recognise their situation.
The "hidden homeless", at the top of the funnel, actually make up the vast majority of the homeless population. What's even more surprising is that this group overwhelmingly has access to technology, 90% have access to a smartphone or laptop with internet access.
The not-for-profit organisation I am involved with called Ample Labs (www.amplelabs.co) is working on developing chatbots to more rapidly connect this group with essential services. This allows us to get a better understanding of their behaviours, what services they use and how effective they are. This has two benefits - first by connecting the 'hidden homeless' with essential services quickly, we make it less likely that they will fall further down the funnel into chronic homelessness; secondly, it provides us with essential data that we share with cities to inform policy making.
The long term hope, is that by using data to prevent at-risk populations from falling deeper into homelessness we can combat the problem at its source and start to eliminate homelessness before it even begins.
Of course, there are plenty of national and state-level policy organizations; some even dip their toes in the municipal policy scene. But in the cities, most groups are self-interested or focused on a single issue.
We're trying to fill the gap with original research and projects that operationalize the research of others -- taking, for example, good research and popularizing it, developing components for model ordinances, etc.
There's a ton of problems when you're dealing with food though. Calculating calories of a recipe you find online can be tough. On one side, it's a natural language problem to extract the ingredient, the amount, the unit, and the prep/notes. On the other side, it's a data/data matching problem, where you need good data on a ton of ingredients, and then need to pick a reasonable one for "1 cup of milk".
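To give a feel for the parsing half, here's a naive sketch of the line parser - real recipe lines are far messier (unicode fractions, ranges, "to taste", and so on):

```python
# Naive ingredient-line parser: amount, unit, ingredient, prep/notes.
# Real recipe lines are far messier (unicode fractions, ranges, "to taste", ...).
import re
from fractions import Fraction

UNITS = {"cup", "cups", "tbsp", "tablespoon", "tablespoons",
         "tsp", "teaspoon", "teaspoons", "g", "gram", "grams", "oz", "ml"}

def parse_ingredient(line):
    m = re.match(r"\s*([\d\s/.]+)?\s*(?:(\w+)\s+)?(?:of\s+)?([^,]+)(?:,\s*(.*))?", line)
    amount, unit, rest, notes = m.groups()
    qty = float(sum(Fraction(p) for p in amount.split())) if amount and amount.strip() else None
    if unit and unit.lower() not in UNITS:   # "2 large eggs": 'large' is not a unit
        rest, unit = f"{unit} {rest}".strip(), None
    return qty, unit, rest.strip(), notes

print(parse_ingredient("1 1/2 cups milk, warmed"))   # (1.5, 'cups', 'milk', 'warmed')
print(parse_ingredient("2 large eggs"))              # (2.0, None, 'large eggs', None)
```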
And of course everyone eats and prepares food so differently that suggesting meals they'll actually enjoy is hard without asking them a bunch of questions first.
I started this because I'm learning guitar mostly from YouTube, and I find myself constantly seeking to specific sections of videos.
I'll probably launch the site on ShowHN soon. Feel free to DM me if you can think of other uses for this, or if you're interested to know when this launches.
That's fun.
The new system is a quite simple SOA arch with a dull, only-real-data db layer, backend in Go with code generation, frontend in ES6 migrating to Elm.
The look the IT guys have when we say 'no really, we don't need IIS or Java' - it's priceless :)
The interesting part actually lies in handling both product management and sales for the new version while handling the day-to-day coding part.
Sometimes I think I should write a book on those subjects :)
Short list:
* Pollution and the climate
* Privation
* Avoidable death
* Interplanetary settlement
* Liberty and communication
* Transportation
My primary focus is developing and commercializing reliable clean energy, because I believe that is the most effective way to further progress in the majority of the above problems.
To that extent, I've come to terms with an inability to spread my focus across all of them simultaneously and drive great results so instead I've taken an approach of working on a few of them full time myself and investing in efforts that work on others. My intent is to keep ~100% of my net worth invested in these main problems (either in my own or somebody else's projects) in perpetuity.
In my personal life, I've also recently been spending a lot of time thinking about health and purpose: how to build discipline, how people can/should decide what to do with their life, how to stay healthy and build fitness, etc.
Side project: in my free time over the last few weeks I've also been thinking more about how to create lasting models for information and media, and so I'm building a markup language / static site generator in that pursuit [2]
Currently, tracking is limited either A) by type of transportation (ships, rail, trucks) or B) by the freight forwarding company.
If you use multiple freight forwarders, you're stuck entering data from PDFs into spreadsheets to create your own custom usable dashboard.
If you use one freight forwarder, you have access to the main journey points, either as a spreadsheet, or if they're more sophisticated, through a web app. But I've only found one (silicon valley backed) Freight Forwarder [0] that gives the last-mile data -- e.g. last free pickup dates, pickup numbers, last free dropoff dates, return locations etc. -- through their web app.
This is critical for managing warehouse operations, especially for companies that handle their own last-mile (like we do), and it's been an absolute pain as we've scaled.
- a jdbc driver for interacting with google sheets
- a cross-OS application which lets you easily share data
The first one is almost done and mostly requires documentation and some clean-up. At the moment it supports simple SQL queries like SELECT * FROM, INSERT INTO foo() VALUES (), and an UPDATE whose exact syntax I don't remember right now. It also already has a DataGrip integration.
The second project shall work wirelessly and with minimal setup. The original idea was that devices search for each other on the local network (via broadcast) and then connect. Further ideas that arose during development were:
- play sound on another device (which I initially thought would be super easy but seems like it is not)
- provide a possibility to define outside applications (e.g. you provide a configuration file describing how I communicate with your application, and this lets you show information on other devices)
- Not just device-to-device but also something like groups
- messaging with other devices
- more communication possibilities, i.e. via (outside) IP or Wi-Fi Direct
Trying to fix uploading through tus.io (low level protocol) and uppy.io (user interface). Both open source and free to add to any project.
A humongous problem is the absolute lack of data that many psychedelic assisted therapists, guides, spiritualists have, to be able to point to their specific types of therapy as effective.
You come across folks that make humongous claims about the specific modalities they use, but don't track the progress of their patients and therefore don't have the data to prove it. We're working with volunteers at Tabularasa.ventures to develop some simple applications to both screen clients and also allow for practitioners or individuals to record progress (reductions in depression, PTSD, etc.) over time whether treating with microdosing, self administered, or more standard psychedelic assisted therapy (PAT) methods.
Happy to collaborate -> marik@tabularasa.ventures
We're basically trying to make an opt-in service that can make procuring these relatively painless by grouping all relevant parties and then keeping these on record. A glorified KYC of sorts, and then looking to use these as a means of authenticating (I should be able to use my profile to sign up anywhere or transfer my data (or parts thereof) to another party). Lots more to flesh out, but we have a good grasp of where we're going and what we want to achieve. Our government has tried to do this in the past but failed at getting it past the courts due to privacy concerns, and they are set to try again. I skimped on some details but the idea should be clear.
With new data protection laws introduced/proposed and more amendments to come, it's a simple but interesting problem at this point in time: navigating everything, including how we do our own verification, security, and eventual licensing, to achieve the desired outcome.
https://github.com/moj-analytical-services/sparklink
It's currently in alpha testing, but the goal is for it to:
- Work at much greater scale than current open source implementations (the ambition is to work at 100 million records plus)
- Get results much faster than current open source implementations - ideally runtimes of less than an hour.
- Have a highly transparent methodology, so the match scores can be easily explained both graphically and in words (it isn't a 'black box')
- Have accuracy similar to the best products on the marketplace.
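To illustrate the general Fellegi-Sunter idea behind this kind of linkage (a generic toy, not sparklink's actual API or methodology): each field comparison contributes a log-likelihood weight depending on whether it agrees, and the sum gives a match score that can be explained field by field.

```python
# Generic Fellegi-Sunter-style match scoring between two records.
# Toy illustration only; not sparklink's API. m = P(agree | match),
# u = P(agree | non-match); the log2(m/u) weights are what makes the
# score explainable field by field.
import math

FIELD_PARAMS = {           # illustrative m/u probabilities per field
    "first_name": (0.95, 0.01),
    "surname":    (0.97, 0.005),
    "dob":        (0.99, 0.001),
    "postcode":   (0.90, 0.02),
}

def match_weight(rec_a, rec_b):
    total = 0.0
    for field, (m, u) in FIELD_PARAMS.items():
        agree = rec_a.get(field) == rec_b.get(field)
        total += math.log2(m / u) if agree else math.log2((1 - m) / (1 - u))
    return total  # higher = more likely the same entity

a = {"first_name": "jane", "surname": "smith", "dob": "1980-01-02", "postcode": "SW1A"}
b = {"first_name": "jane", "surname": "smyth", "dob": "1980-01-02", "postcode": "SW1A"}
print(match_weight(a, b))
```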
I arrived at this idea from two directions. The first direction is that I sometimes try to code review some of the questions over at CodeReview SE, and the whole thing feels unergonomic. I dislike scrolling up and down to check the code and constantly losing track of the things I'm reviewing. This is where I think inline commenting would help. Also, there is not a lot of room for discussion. You only get those comments below the review, where you only have a few characters to argue your point. The second direction is that I produce code snippets (programming homework, short snippets at work, etc.) which I would like to submit for review. I don't always want to submit it to the entire internet for review. I just want to get a private link to the code review, which I can share with my colleagues so they can review it. Kind of like a reviewable PasteBin.
Some of the features I would like to add: importing files from GitHub for review, and letting users import their unanswered CodeReview SE questions for another round of review.
It's been fun to work around the constraints on an underpowered device. It's also an excuse to learn ARM assembly, and a nice break from all the JavaScript I've been spending my time in lately!
Research papers have been talking about it for many years, but until recently there was no cheap enough hardware available, so it just got stuck in university laboratories.
It is still a tough project to pull off, as it combines hardware, cloud software, and machine learning, and there is quite a bit of laboratory work required as well. Doing all of this as a single person while bootstrapping is an extra challenge, but I guess I don't know any better.
I got into it 5 years ago, when I decided to quit technology, bought a small farm, and built a winery. At first I wanted to analyze the wine itself, basically to make the traditional method obsolete, but the performance of this kind of instrument isn't good enough for a liquid that complex. It turned out grape analysis is a much easier target to tackle.
The problem: the hassle of splitting proceeds from a service/event/product sale after the fact - sending/collecting your % share, plus the timing and details of wiring the proceeds.
Solution:
Pre-set allocations and create a customized checkout so that splits happen on a per-payment basis. Members don't have to wait to get their share.
Idea kind of came about after watching my wife, who is a yoga instructor/ studio owner try to split proceeds from a workshop she hosted with a few collaborators.
Another example: allows you to create a shield to a checkout that will split proceeds on a per payment basis.
https://github.com/surfertas/deep_learning/tree/master/proje...
Working on this on my spare time. Any advice from the community would be greatly appreciated.
http://aperocky.com/prehistoric
It's already got pretty sophisticated production logic, and also a unified market.
Looking to add a few features like child support, new resource types, and maybe eventually a governmental system. I could even try out different government strategies.
If you have any ideas please share. It's been my passion for 2020 so far.
Being a good hacker, I pulled at that thread until I had another, and another. Now I'm writing about semiotics, language, lambda calculus, and philosophy of science stuff. It's all related to my original quest for a better explanation, and it affects everything from AI to coding practices. I'm about done now. Now the trick will be getting it all in a format that's consumable by the average programmer.
It's definitely one of the most interesting projects I've ever worked on!
Having run a cybersecurity services business for three years and previously working for federal clients, I know that government and large banks are sucking the talent up, leaving fintechs two options: ignore security or overpay.
On the reverse side, there are lots of talented independent providers who simply need somebody to vouch for their skills. We meet with and vet everybody on our platform to make sure they have the capabilities.
Will be launching a prototype to replace this landing page shortly. If you're in the New York area and are either looking for cybersecurity contractors or looking for a project, I would love to get your input!
Most companies at one point are internally not aligned, marketing fighting product fighting development fighting design fighting sales.
All are wanting to contribute value, all hindering each other in the process of doing so.
The goal is one framework where a) an initiative can start from any group/team/individual within the company b) every other part of the company can rally behind - with their own expertise and point of view.
I always start with a talk (gave the first about it last Thursday https://jtbd.ws/), then I take it from there.
We've had a ton of great feedback from customers, and we are working on several new sleep technologies that we plan to release this year.
It's also been interesting to apply the lean methodology to hardware. Iteration cycles are long, but I'd argue that lean is just as important for hardware as it is for software.
We're at a very early stage and looking for investors.
I just like reimagining things, trying to elucidate first principles and go from there.
As we connect classrooms and scale across different countries, the problem set has grown exponentially.
There is an unlisted sample video in there that I've put out for feedback, and I'm making changes based on that. Will also be putting together related content around GCP.
p.s. do subscribe to the channel.
One reason I'm targeting parsers in particular is because I've been finding a lot of modern programming language books are a bit anti-parsing these days. EOPL avoids parsing altogether by using a parser generator, effectively saying that it's a hard problem. PLAI outright calls parsing a "distraction". SICP (not strictly a compiler book, I know) and Lisp in Small Pieces just use the triviality of parsing () languages, which I feel doesn't generalize well.
I emailed the author of PLAI (Shriram Krishnamurthi) about this. His response was effectively that modern books come off anti-parsing as a reaction to old books, which were parser heavy, and tools like YACC -- "Yet Another Compiler Compiler" -- even though it's just a parser generator, not a compiler compiler! He went further to say that, given parsing is roughly trivial in () languages, it sort of seems parsing is only incidentally a compiler/interpreter problem, and users of () languages view non-trivial parsing as signalling a design flaw. I found this to be an interesting take, but in my day job I generally don't have much say in the design of "languages" of semi-structured text that gets thrown my way.
Anyway, I know the Dragon Book covers parsing in some detail but for some reason it's been kind of impenetrable for me -- it feels a bit more abstract than I like. I can follow it, but while reading it I can't help but wonder -- "is this actually going to help me in practice?"
I recently have been reading Niklaus Wirth's stuff though, like the last chapter in his algorithms book and his Compiler Construction book, and those have been absolutely fantastic.
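In that Wirth spirit, the style of parser I keep coming back to is a small recursive-descent one, with one function per precedence level - a toy arithmetic example:

```python
# Tiny Wirth-style recursive-descent parser/evaluator for:
#   expr   -> term (('+'|'-') term)*
#   term   -> factor (('*'|'/') factor)*
#   factor -> NUMBER | '(' expr ')'
import re

def parse(src):
    tokens = re.findall(r"\d+|[()+\-*/]", src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(expected=None):
        nonlocal pos
        tok = tokens[pos]
        if expected and tok != expected:
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        pos += 1
        return tok

    def factor():
        if peek() == "(":
            eat("(")
            value = expr()
            eat(")")
            return value
        return int(eat())

    def term():
        value = factor()
        while peek() in ("*", "/"):
            value = value * factor() if eat() == "*" else value / factor()
        return value

    def expr():
        value = term()
        while peek() in ("+", "-"):
            value = value + term() if eat() == "+" else value - term()
        return value

    return expr()

print(parse("2 + 3 * (4 - 1)"))  # 11
```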
I also asked a question on SE about a particular parser I'm working on -- if anyone has some thoughts I'd love to hear them :)
https://codereview.stackexchange.com/questions/236222/recurs...
https://www.whoi.edu/press-room/news-release/whoi-awarded-1-...
Additionally I just won a grant at work to begin designing and building an open source underwater glider. Underwater gliders are one of the best ways of carrying instruments to sample the ocean. They can last 6+ months and be directed to interesting areas. The billion dollar companies that make and sell underwater gliders are focused on oil+gas+military business and are not giving the service, support or product depth the science community needs. They are in dire need of a tech refresh - they fail a lot for an old technology and run DOS. The only way we have a chance of understanding the ocean is to make sampling the ocean more affordable, reliable and accessible.
I’m starting with the sharing of common information with clients and partners. Organizations are often required to supply information on a regular basis to a wide range of clients and partners (bank account details, company registration details, tax clearance documents, certifications, charity registration number, etc.). A lot of these documents need to be renewed on an annual basis, so there is a constant stream of requests for updated versions.
For bank accounts, the ability to verify a bank account automatically can prevent invoice fraud.
I’m looking at a model where a piece of business information is uploaded to a central platform, and the owner then grants permission for others to access it and to receive notifications when a new version is available.
In the first startup school batch for 2020 and working on validating the problem with actual users.
From our experience, the biggest issue students have is that they can't solve a problem because they didn't understand a concept they already "learned" in the past. It's simple, yet powerful.
At the moment, I'm fighting with a monolithic, untouched Java 8 / JavaEE6 service which has lots of old dependencies and uses old cryptographic ciphers, some of them classified as unsafe (e.g. brainpool512p1).
No one knows how to make a reproducible build, since everyone gets a different, working or not-working package, some modules are not even released (using the infamous -SNAPSHOT) in Maven, and there's no documentation. Unfortunately, there's little testing, so everything can be broken easily and no one would know it.
Some developers are also really undisciplined, touching code but not running end-to-end (manual) testing, not even running the installer.
If I had the decision power, I would throw this thing away and start from scratch, probably without Java too or, if Java, at least the latest version and maybe Spring, not JavaEE: WildFly moves too fast and each release breaks compatibility with the previous one concerning settings (Red Hat: why do you do this??)
As a fun project, as I already had code to generate llvm bitcode from .NET, I now do mem2reg (convert stack spots to SSA registers), dead code elimination, constant folding and other small optimizations. That part now works, and I managed to create a simple x86_64 coff object file (with everything needed to link to it, including pdata, xdata) that returns the "max" value for a given integer.
That is about all that works for now, and I don't get to spend much time on it, but the end goal is to have a "good enough" code generator for non-optimized cases that could potentially be faster than llvm (to emit). The primary goal is to learn how to do this though :)
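For flavour, constant folding itself is conceptually tiny - here's a generic toy over a mini expression tree (the real pass runs over the SSA form, not an AST like this):

```python
# Toy constant folding over a tiny expression tree.
# Generic illustration; the real pass runs over SSA form, not this mini-AST.
OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b, "-": lambda a, b: a - b}

def fold(node):
    """node is either an int, a variable name (str), or (op, left, right)."""
    if not isinstance(node, tuple):
        return node
    op, left, right = node
    left, right = fold(left), fold(right)
    if isinstance(left, int) and isinstance(right, int):
        return OPS[op](left, right)          # both sides constant: fold now
    return (op, left, right)                 # keep a (partially folded) node

# ("x" * (2 + 3)) - 4  ->  ("-", ("*", "x", 5), 4)
print(fold(("-", ("*", "x", ("+", 2, 3)), 4)))
```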
Having previously worked at a marketing company and a startup, it’s been fascinating to experience a legacy manufacturing company growing (or trying to grow) into the future.
Yes, the engineering problems are fun and all, but I think the most fascinating part has been thinking about what American manufacturing will look like 5, 10, 20 years down the road.
In my experience, I believe American manufacturers will NEED to invest in industry 4.0 tech in order to mitigate costs associated with rising wages, shortages of skilled machinist labor, and greater demand from consumers/regulators/OEMs for information and transparency.
I’ve also been quite amazed at how much paper is still used and the lack of industrial software products with quality UX.
And I don’t think American manufacturing will ever cease to exist.
I have quite a few domain names that I have purchased over the years that I am not doing anything with at the moment.
I wanted a minimal amount of work to make good use of these domains.
So I built Newsy. It turns your idle domain into a news aggregator.
I’m nearly there. You can sign up and I’ll invite you to check it out!
IMO the current options are too complicated or expensive, and only appropriate for the largest companies. I can't just hack together a simple application for data discovery or usage statistics. So I am building a dead simple data catalog that I can reuse. The data lineage app is the first app on it.
(1) https://github.com/tokern/piicatcher (2) https://github.com/tokern/lineage
Launching soon.
During a sabbatical trip in the Canadian Arctic in 1975, I came into close proximity with a beluga in Hudson Bay and was impressed with its unusual vocalizations, which were in-air and about 3 feet from me. The beluga was tragically killed by Inuit a few minutes later. That's how my interest started. I later learned the basics from two people who were leaders in this field.
Hopefully in 2020 this means more than simply resurrecting a clone of MS Office's Clippy...
Increasing food safety and security plus availability and choice in urban environments using robotics to automate food preparation and software to manage operations and logistics. Hopefully also make money. Differences from web stuff: includes embedded, mobile, electronics design, mechanical design, fabrication, business, cross-border operations, food safety regulations, etc.
When signing up for services that require real identities (banking, insurance, etc.) the standard currently is to require a picture of a passport, a video of yourself, or copies of some paperwork. These methods are all high-friction and provide dubious security and privacy. This is already a solved problem in some countries and I'm working on the equivalent on a larger scale, without the geographical restrictions.
If there is anybody else here working in this space then feel free to reach out!
- a graph-based task manager that incorporates dependencies between tasks and infinitely-nested subtasks - IE maps to how we actually think about tasks being related and broken down. Aiming to get this one shared with the world in early Feb.
- a visual programming environment that represents how we model software in our heads, not how it runs on the computer/s. This is my longer-term, much more experimental project.
Drop me an email (in my bio) if you're interested in either! I'll be commercialising the former quite soon and I'm putting a lot of effort into making it pleasant to use.
https://github.com/ikorb/gcvideo
GCvideo has a way to convert the digital signal on the N64 into composite video, and has VHDL to create an HDMI signal with audio. So I have been working on finding the digital audio out on the N64 and converting the whole signal to HDMI.
In not so many words, I am recreating this from scratch: https://www.retrorgb.com/ultrahdmi.html
Mainly because it is impossible to find that.
I built a similar tool internally at my last company and we used it to alert on things like employees making google drive files public to the internet, okta configuration changes, github ssh deploy keys getting added, employees logging in from foreign countries, etc.
If anyone wants to check it out you can reach me at arkadiy{at}logsnitch.com (or just sign up at the same domain).
Been a really tough journey. I was the only coder and designer on the project for the longest time, and my development skills weren't really that good when I started building this.
Here’s a link to it https://nyxo.app
There are some interesting value-judgements that have to be made here (e.g. do we value the consumption of future generations more/less than present consumption?), so I suspect there will never be an objective answer to this question.
Funny thing is, the basic software more or less exists already. At least on the fulfillment and logistics side of things. The tricky thing now is to create the physical network (also companies like DHL ship for everyone, even next day) and come up with the processes to match n retailers with m dropshippers (some of them shared between retailers) and a basically infinite amount of consumers.
I said long shot. First step is to get my 4PL company off the ground. A 4PL is a nice first step; I take care of daily logistics operations for clients, which also includes, pretty early on, a dropship component. So once the 4PL is earning some money, the next step will be to define processes for a scalable dropship solution, identify software gaps, and then create the platform. Talking about long shots...
How did I get the idea? I worked for Amazon running, among other things, the Amazon.de dropship network. After that I worked for a producer of solar modules. That company sold some of the modules through a webshop and had some dropshipping. Totally inefficient, intransparent, and expensive. So I told myself that this could be done better. It took me three years to take the leap into the startup world.
There's a lot of "greenwashing" in the industry, driven by opaqueness and a lack of measurable data.
Step #1 is to get more brands on board. Step #2 is to make it easier to monitor the supply chain and have actionable and measurable KPIs built around data.
I've been reading about this for years and recently started sending out short summaries of what I've learned (typically geared at how the lessons can be applied practically).
Last week I shared how Nobel laureates are 22 times more likely to have a side hobby as a performer than their peers.
Ultimately, I am trying to land on a succinct answer to "how do you channel broad interests and talents into an impactful career?"
(this is my email: https://stewfortier.com/subscribe)
2. At work, I recently completed a really long project with a large team. I'm trying to make the lessons learned accessible to others in the company because they'll also be undertaking similar projects soon. That means documenting my learnings at a level of abstraction that allows others to not make the same mistakes as us, but still have enough flexibility to tailor their implementation based on their team's needs. The hard part is the intersection of technical and people-oriented knowledge dissemination.
This year is going to be focused on a lot of teaching, which I'm excited about.
https://datasetsearch.research.google.com/search?query=whole...
Teaching machines to diagnose cancer with superhuman sensitivity and specificity makes it easy to sleep at night.
You call an endpoint anywhere on the planet and give the name of the service you want, which then gives you access to that service's published API (similar to how you'd use import and gain access to a library's API).
To start, it will operate over port 80/443 to allow seamless integration into the current world infrastructure, but I'm also hoping that in maybe 10 years it could replace HTTPS entirely, possibly even TCP.
The first step is an encoding mechanism that supports the most common data types natively, which I've defined here [1], and am currently writing implementations for in go. It's a parallel text and binary encoding so that we don't waste so much time generating bloated text that's just going to be machine-parsed at the other end, but also allows converting on-demand to a text format that humans can read/write. I ended up developing new encoding schemes for floating point values [2] and dates [3] to use in the binary format.
The next layer above that is a generic streaming protocol [4], which can operate on top of anything from i2c to full-on HTTP(S), and supports encryption. It's designed to be as non-chatty as possible so that for many applications, you simply open the connection and start talking without even waiting for the other side's acknowledgement. It supports bi-directional asynchronous message sending with chunking and optional acknowledgement on a per-message basis, with tuneable, negotiable message header size.
The final layer will be the RPC implementation itself. I want this as a thin layer on top of streamux because many of the projects I have in mind don't need full-on RPC. This part is still only in my head, but if I've designed the lower layers correctly, it should be pretty thin.
[1] https://github.com/kstenerud/concise-encoding
[2] https://github.com/kstenerud/compact-float
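To give a flavour of the chunking idea in the streaming layer, here's a generic sketch of length-prefixed chunked framing. It is purely illustrative - the header fields and sizes are invented, and it is not Streamux's actual wire format (which has negotiable header sizes and acknowledgement semantics this toy ignores).

```python
# Generic sketch of chunked message framing: each chunk carries the message id,
# a "final chunk" flag, and a length-prefixed payload. NOT the Streamux format.
import struct

HEADER = struct.Struct(">IBH")   # message id, is_final flag, payload length

def encode_chunks(message_id: int, payload: bytes, chunk_size: int = 1024):
    chunks = [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)] or [b""]
    for i, chunk in enumerate(chunks):
        is_final = i == len(chunks) - 1
        yield HEADER.pack(message_id, int(is_final), len(chunk)) + chunk

def decode_chunk(frame: bytes):
    message_id, is_final, length = HEADER.unpack_from(frame)
    return message_id, bool(is_final), frame[HEADER.size:HEADER.size + length]

frames = list(encode_chunks(7, b"x" * 3000))
print(len(frames), decode_chunk(frames[-1])[:2])   # 3 (7, True)
```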
You host a copy of my web application, and it handles all your user account stuff with modules that add organizations, Stripe Subscriptions and marketplaces powered by Stripe Connect. You write your application with its own web server in whatever language and the two servers form one site.
At the moment I am trying to finish automating my documentation based on the test suites including API details from API tests and screenshots from UI tests.
I am looking for testers if you are building a SaaS or a Connect marketplace.
Whenever we begin to do something, our computer just sees a bunch of apps and windows. It never tells us how to get better or does things on our behalf. At Amna, we’re working on a natural interface structured around the way people think. We believe it will change the way you interact with computers, and the way computers learn from us.
full problem: https://getamna.com/blog/post/amna-solves-problems/
In 2020, we plan to continue the VR Roadshow and brainstorm new ideas to bring more awareness to virtual reality tech.
Professionally - change IoT into one big robot, make a platform to connect ALL devices with one system, essentially what Bruce Schneier warns us about[0].
[0] https://www.schneier.com/blog/archives/2016/02/the_internet_...
Hence I started a website to curate cool & impactful projects that people are building that nobody knows about because they are small or unknown (yet). So it's kind of a discover-amazing-companies / make-an-impact kinda thing. Hoping to launch this month.
I have built a VERY basic landing page but I am struggling to get time to spend on it.
I have to say that the technical challenges of bringing in modern web technologies to interface with legacy systems has been an interesting (and frustrating at times!) experience. After working as a software dev for a number of years before taking this on, I’ve been jumping between sales, marketing, devops, management, and actual software development all in a day.
The implant helps patients perceive visual information about their surroundings.
Pretty cool tech and fun to work on, too.
So, for example, the input is 'nytimes.com' and the output will be the last headlines.
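As a rough sketch of the extraction side (standard library only; the heading-tag heuristic and request headers here are assumptions for illustration - real sites need per-site rules or a proper feed):

```python
# Rough sketch: fetch a news homepage and pull out heading text as "headlines".
# Standard library only; real sites need per-site rules, feeds, or a real parser.
import urllib.request
from html.parser import HTMLParser

class HeadlineParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_heading = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.in_heading = True

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3"):
            self.in_heading = False

    def handle_data(self, data):
        if self.in_heading and data.strip():
            self.headlines.append(data.strip())

def latest_headlines(domain):
    req = urllib.request.Request(f"https://{domain}", headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req, timeout=10).read().decode("utf-8", errors="replace")
    parser = HeadlineParser()
    parser.feed(html)
    return parser.headlines[:20]

print(latest_headlines("nytimes.com"))
```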
Plan to release it in a few weeks.
1. Heatmaps based on all popular gradient based explainable AI techniques (plus our own) for classification, regression and semantic segmentation tasks.
2. Uncertainty modeling for classification, segmentation and regression tasks.
3. Concept Discovery/ Pattern Discovery (and dependence) for patterns learned within a deep neural network. (Loosely based on TCAV)
4. Using network internals for optimal pruning and model compression.
Send us an email at sales@untangle.ai if you’re interested in trying out our toolkit. We offer a 30-day free trial period.
It might not be as impressive as some of the comments, but it does seem like something the market is needing.
I'm also working as a contractor on automated valuation systems for real estate properties, mainly for the Argentinian market. The company has already sold the service to a big international bank to periodically update their mortgages.
And now I'm pondering starting a research+prototype AI consultancy.
https://github.com/ityonemo/zigler/
On my plate currently: Figure out how to make a long-running zig function run in an OS thread alongside the BEAM so that it doesn't have to be dirty-scheduled.
- how to do digital identity in health and public services for ~15m people
- replacing enterprise/waterfall security risk assessment with collaboration and iteration.
- applying product management methods in the public sector
https://www.instagram.com/starshiprobots
Technology and business do work, so we probably will have thousands of robots within a year, and millions not long after that )
The hardest part has been deciding what to fight first and meeting other people who have experience working with algae. I would love to connect with anyone that wants to talk Algae!
I create computer models of water networks and calibrate them so utilities can do what-if and growth-scenario planning (e.g. what happens if this pipe bursts? how would the network cope with 20k new houses in 40+ years?).
I'm also developing software to help water engineers build and run models, some of it opensource and some of it commercial.
I'm currently pushing most of my effort into an opensource javascript library to simulate water networks.
Since the type information is erased at compile time, it uses the compiler API to extract the data needed and generates TS code for the interface and constructor mappings.
The library is on GH, but there's not really much to show yet. I've posted it on /r/node and it got some positive reactions, but it didn't get that much attention.
My fun stuff at the moment:
1. Learning Windows IoT on a Raspberry Pi 3B
2. Working on proof-of-concept Search Engine Indexers for specific datasets and/or local file-systems (on network servers).
3. Exploring a new paradigm of allowing people to easily publish train-of-thought type content without having to post a long series of tweets or silo it inside Facebook/LinkedIn/Gist etc.
Business listings are fairly sparse in some countries. Many owners do not bother creating even a Google Maps profile and just rely on word-of-mouth for new clients. Acquiring the bottom of the data iceberg will require some creativity going forward.
The problem this solves is sharing. Sharing between devices/users should be as simple as copy/paste initiated by either user like everything is local.
But it's more than just that. I want to take the material and make it more entertaining as well as educational.
I am observing that more and more kids are learning things on their own by just going online and searching for videos on how to do X. We are on the cusp of online learning overtaking traditional in classroom learning in terms of quality and presentation.
I'm happy that it works well for many small sized problems.
https://bustl-app.com - A SaaS product that acts as a personal assistant that will integrate with a range of different apps.
What I can't figure out is what to use as inputs, similar to the human senses, so that it doesn't become too specific, i.e. weak, but instead remains general and able to understand the binary language computers use.
Predictions are a critical part of decision-making, and it's possible to improve – see, for example, Philip Tetlock's work. But that requires the right tools, which we are building: https://www.empiricast.com
I'm mostly heads down coding every day, building an MVP. Also trying to find some investor interest where possible, however fundraising has never been something I'm good at.
While this sounds like a complainy pants problem, this is a very real problem for a very large percentage of the United States. Without a 4-year degree as a de facto dues card, you are severely limited in your options.
At 34 years old, I could maybe have a degree by 40 while working full time and have to take on 30-60k dollars of debt to be competing for entry level jobs against 20-22 year old applicants (many schools now have programs so students can graduate simultaneously with a high school diploma and an associates degree). At my current income, if a degree could get me an extra 15% within a year of graduation, I would be in my 50s before I paid the loans off at current rates. That means I sacrifice the last half of my 30's to break even in my 50's and maybe make some extra money in my 50's and 60's losing out on 20-30 years of compounding interest because I don't have that arbitrary degree in anything as a dues card to say I'm worth hiring/promoting.
Blah.
Last year I made about 10% less than the year before because of zero overtime, our annual merit-based increases often are break even (sometimes not even break even) once you factor in inflation and insurance cost increases, throw in the constant nagging pressure of cancer risks (father died of it, mother had it, father's mother died of it), climate change, international trade issues which could see me laid off, automation possibly replacing jobs in the near future, it can often be quite crushing. Especially when you're trying to maintain sobriety and just want to run off into the woods with a cask of high proof alcohol and try and befriend a bigfoot to help provide food and shelter for you so you can die from Lyme disease or exposure living as a refugee in Bigfootville.
Meanwhile you see people with YouTube channels buying what equate to mansions (What's Inside, Jenna Marbles) and taking international trips monthly (What's Inside, Casey Neistat used to, on aircraft with seats in the tens of thousands of dollars a flight) and even crazy domestic trips frequently (What's Inside) and you're like, "Dude, I just want to make more than 34k a year".
I truly can't imagine what it is like for people that are consigned to working fast food/retail/service jobs as their sole source of income. It has to be all but crippling.
I've had too many friends and family members end up at companies that were not a match and watched the massive stress pile up. I want to help people find the right team/culture for them.
I am working on creating a solution that gathers the data normally not seen in the console dashboard and discovering actionable insights that help the user.
It's an interesting problem, requiring both dexterous manipulation and long-term planning. It's also compositional, so I believe some form of hierarchical control and planning can solve it.
www.clvrai.com/furniture
2. An all-encompassing personal knowledge management solution that is effortless and universal.
Why am I still using pen and paper for math homework? Why do I have to rewrite the whole friggin thing every step?
And there's a hope that whatever I learn might be useful for lisps, too.
1. The 4-manifold problem: while you can see the surface volume of the shape should be equal, the math proof is the impossible rub. 2. A prime number generator.
This can lead to AI-suggested interventions that people can apply to themselves or use to support someone else.
It's a complex problem, but there must be a better way of doing things!
Sounds easy...
But it's the most difficult thing I've ever tackled. Even considering I've read books like water since I was a kid.
Released recently as my first app on Google Play:
https://play.google.com/store/apps/details?id=com.due.core&h...
It is a next generation carbon offset marketplace.
We help you avoid distractions while working on websites and the internet by installing our Chrome extension. For example, Baitblock removes recommended videos on YouTube while you're working.
It also deals with 1st-party cookie tracking. It clears cookies/storage on every page load as long as it detects, using machine learning (NLP), that you're not logged in to the website (the upcoming version fixes many bugs).
Since there are too many cookie/GDPR popups nowadays, Baitblock automatically hides them while you're working.
You can also add summaries/TL;DRs for any link on a website (right click) so others don't have to click.
The end goal of Baitblock is to block all possible distractions in a webpage and save everyone's time.
The latest version of Baitblock 0.1.0 is awaiting approval with many fixes and new features.
Lots of webscraping.
My Open Collective page (I’m ballin’ on a slim budget here) https://opencollective.com/techfins
You can see more of my fins effort on my instagram page, @stormfins
I recently decided to use the techfins name instead.
I ended up working on this thanks to a passion for surfing, and knowledge that new airfoils could radically improve my surfing ability by augmenting my surfboards’ capabilities. I learned CAD four years ago just to do this - make fins based on new airfoil templates. This ‘new class’ is essentially high lift fins. Compared to the current surfboard fin standard (6mm thick fins), my 16mm thick fin designs provide radically more drive, traction and stability to surfing at the lower speeds. Making normal to pumping surf more accessible and enjoyable to novices and experts.
These fins are not only empowering for surfing ability, they’re also safer because of their thicker, more rounded edges and, when 3D printed, the fact that they break before your skin does. Also, if these fins could be fitted with internal flotation during printing, they could be recovered and glued back into place using automotive plastic glue.
These are also literally the first high performance Wavestorm fins to be created as well. Anything else out there is a boring, simple fin design.
Some past feats I can be proud of that you all may appreciate:
• Created BeelineReader.com’s first working app, helping them get off the ground. It helps you read much faster using a novel, internationally patented innovation.
• Created a web browser with T9Space.com that empowered 10s of thousands of Nokia phone users around the world to access the desktop only internet back in ‘07-‘10
Earned a bachelor’s in CE at UCSC 20 years ago.
My moniker thirdsurf is about a P2P ‘school of surfing’ project I want to get going next. There needs to be a more dynamic connection between those with the knowledge and those who’d like to learn.
If this fins thing gets off the ground, it'll open up other possibilities. Nucleos.com is an example of a company that's been developing a cheap school software server that can operate in developing-world conditions. I want to see 3D-printing fin labs sprout up, each built around a computer that can serve out edu apps to anyone nearby with a wifi device.
I also aim to set up a P2P market for people who could fabricate the digital fin designs I'll be making available. This could open up a market of innovation, empowering people to tout novel materials and fabrication methods, helping advance greener, safer and more economical ways to make these fins and other goods.
Also, in places where it may be difficult to obtain filament, the machines that https://preciousplastic.com/ and others are developing could turn plastic refuse in developing-world areas into a precious commodity.
According to bankrate.com, over 60 percent of millennial home buyers regret their home purchase. SHOCKING! In our industry this stat is overlooked, and that made me wonder why. I was working with someone whose lease was coming up and who wanted to explore buying a home. They had two choices: rush into a home, or wait another year. They decided to move and lease in the area where they wanted to purchase. But being unfamiliar with the housing market, after signing a 12-month lease there was no way for them to identify future homes that would come to market when their lease was up. This story led us to a problem in our market. To avoid the traditional move-in-ready market, where homes sell in less than 30 days, our users can match with homes that will be available for sale at their expected time to purchase. This is a perfect way to build confidence and prepare for the journey ahead.
Let's keep it 100 percent! We don't have access to who is planning for the future, which could produce a better outcome. As a real estate professional of over 10 years, I've noticed that soon after buyers move into their new home, better homes in the neighborhood get listed for sale at around the same price. We oftentimes think we purchased the best home at the right time, but everyone has a plan and a price. That's real estate, right?
Would you be willing to wait if you found a home worth waiting for, or to make the seller an offer they can't refuse for your one-of-a-kind? Homematchx ensures you never end up wishing you could have purchased the home next door, the home across the street, or the home around the corner. Our platform allows consumers to see homes that will become available for sale up to three years out, giving you access to more inventory than today's market.
I think sellers are at a serious disadvantage when they need to sell. Days on Market is a huge issue, and it's a growing concern in real estate. Sellers are unsure how many buyers fit their home's description and would be willing to purchase it at their desired time. Timing the sale of your home perfectly is unpredictable. Our platform lets sellers see all the buyers, their compatibility with the home, and whether they have been qualified. Never again will a seller list a home for sale without knowing who will actually purchase it.
We are heading into the new construction industry to help home builders better understand the real estate market and who's available. There are so many things buyers don't have access to that they would need in order to time their new construction journey perfectly.
I'm excited about the many problems we can solve, but I know we cannot be successful without users knowing the platform exists. I'm on a Godly mission to finally change the real estate market and make it accessible regardless of your timeline.
Stephen L.
Biggest challenge has been speed: the first proof of concept was a prototype that was kinda slow (C#/WPF/Windows). I rewrote it using the lowest-level stuff possible from WPF, and that took me a looot (roughly 3-4 months, partly to also make it easy to extend/modify). That was an improvement of roughly 3-4x, but for non-trivial stuff it was still too slow (and saving the final video in particular was insaaaanely slow). So I did another rewrite in UWP, and that took another 4+ months.
Now, I'm really happy about the speed - it's 3-4 times faster than before, and at saving, it's 10-12 times faster.
In order to make it happen, I've worked insane hours (and still am) - but that's that. Right now (for the next 2 months) I'm focusing on stability and some improvements. I hope to have a pretty cool new feature ready in roughly 3-4 months, and we'll see.
Challenges: countless, probably I could write a book ;)
1. Parsing existing videos - in WPF that was insanely hard, and it took me a lot of time to come up with a viable solution (which, when porting to UWP, I ended up throwing away).
2. Estimates - I used to be pretty good at estimating how long a task would take. But because everything was new to me (animating using low-level APIs was basically undocumented), pretty much everything took 4-5 times longer than I expected. This was soooo exhausting and depressing that at some point I just stopped estimating, because I knew it would take me longer anyway.
3. Changing the UI due to user feedback - basically, I ended up redesigning 80% of the UI to make it easier to use. What I thought would take me 1 week, ended up taking me 1+ months.
4. Tackling everything at once: trying to implement a new feature while dealing with bugs people would find, or with issues that would come up while implementing the feature - plus dealing with issues raised by the photographers I collaborate with (those who create the app's effects/transitions).
5. Porting to a new technology (UWP/WinRT). This is something I hope I never have to do again - I was forced into it by the speed gains. I had to reimplement/retest every control I initially developed - that's one thing. The other is dealing with the idiosyncrasies of WinRT - which loves async stuff and also loves limitations. Also, the UWP documentation is soooo bad compared to WPF's - and there are very few resources, because most people are put off by it (not going to go into detail as to why; that's another book I could write). Not for the faint of heart.
6. Compilation times - on the old technology (WPF), everything was insanely awesome. On UWP, compilation times are roughly 6 times slower. That is baaaaaaaaad. I'm doing all sorts of workarounds to make things faster.
It's based on a university project I've been working on basically since day one in 2006.
I know it's crazy to work on such a large project alone initially. Lately, however, I'm getting the first contributions, and maybe I should start collaborating with the university or with the company of my former supervisor (who began the project for his Ph.D.).
I'm now more than convinced that the ideas are worth working on, especially with the advent of modern hardware such as byte-addressable NVM :-)
Currently, I'm working on the storage engine itself, to further reduce storage space consumption and to make the system stable. I'm experimenting with importing larger data sets (JSON and XML, currently up to 5GB) with and without auto-commits and with different features enabled/disabled, for instance storing a rolling Merkle hash for each node, storing the number of descendants, a path summary, and so on.
Some of the features:
- the storage engine is written from scratch
- completely isolated read-only transactions and one read/write transaction concurrently with a single lock to guard the writer. Readers will never be blocked by the single read/write transaction and execute without any latches/locks.
- variable-sized pages
- lightweight buffer management with a "kind of" pointer swizzling
- dropping the need for a write-ahead log due to atomic switching of an UberPage
- rolling Merkle hash tree of all nodes, optionally built during updates
- ID-based diff algorithm to determine differences between revisions, optionally taking the (secure) hashes into account
- non-blocking REST-API, which also takes the hashes into account to throw an error if a subtree has been modified in the meantime concurrently during updates
- versioning through a huge persistent and durable, variable-sized page tree using copy-on-write
- storing delta page-fragments using a patented sliding snapshot algorithm
- using a special trie, which is especially good for storing records with numerically dense, monotonically increasing 64-bit integer IDs. We make heavy use of bit shifting to calculate the page path needed to fetch a record (see the sketch after this list)
- time or modification counter-based auto-commit
- versioned, user-defined secondary index structures
- a versioned path summary
- indexing every revision, such that a timestamp is only stored once in a RevisionRootPage. The resources stored in SirixDB are based on a huge, persistent (functional) and durable tree
- sophisticated time travel queries
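To make the trie/bit-shifting point above a bit more concrete, here is a tiny illustrative Python sketch (not SirixDB's actual code; the per-page fan-out and the tree height are assumptions):

# Illustrative sketch: locating a record page in a trie keyed by dense,
# monotonically increasing 64-bit record IDs, using only shifts and masks.
# Assumed layout: every inner page holds 2**BITS_PER_LEVEL references.
BITS_PER_LEVEL = 9                  # 512 slots per page (assumption)
MASK = (1 << BITS_PER_LEVEL) - 1
HEIGHT = 4                          # inner-page levels above the leaves (assumption)

def page_path(record_id: int) -> list[int]:
    """Slot index to follow at each trie level, from the root down to the leaf."""
    return [(record_id >> (level * BITS_PER_LEVEL)) & MASK
            for level in range(HEIGHT, -1, -1)]

# A record ID maps to a fixed sequence of page offsets, so no key
# comparisons or scans are needed to find it.
print(page_path(1_000_003))         # -> [0, 0, 3, 417, 67]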
Besides the storage engine challenges, the project has so many possibilities for further research and work:
- How to shard databases
- Query compiler rewrite rules and cost-based optimization
- A brand new front-end
- Other secondary index-structures besides AVL trees stored in data nodes
- Storing graphs and other data types
- How to best make use of modern hardware such as byte-addressable NVM
[1] https://sirix.io or https://github.com/sirixdb/sirix
Anyway...
This summer & fall I wrote a JS core lib and a set of compatible packages that together greatly simplify the creation of terminal based node apps and games (in the realm of blessed, blessed-contrib and ink, but with no dependencies and with a novel api/architecture)
I got into it because my son did this node project where an animated car drove in a forest of cellular automata generated trees. Yah. You read it right. Things spiraled from there...
It is not a small project and it is pretty close to release form. I’ve used the lib and components to write a couple of small but non-trivial things. So, yes, it works.
In December, though, I stopped actively working on it. There are various reasons. One of which is that there is snow on the mountains. There are other reasons, none of which is code related.
More curious? Ask away. Cheers.
Quite a long story short, I managed to get the mental illness under control; something I thought I'd be living with the rest of my life.
My research has included mostly standing/walking meditation, and reading a lot on philosophy, religion, psychology, and such.
This is a personal project I've only just sort of revealed, after some persuasion by my peers. I didn't really have much intention of putting it out in public, but it has turned out to be something significant. There is a lot to say about it.
EDIT: If you're curious, here is what I came up with after I started recording my research. DISCLAIMER: there is some personal stuff I talk about.
Google query
"base of the natural logarithm e"
reports
e = 2.718281828459
that is, 13 digits.
The calculator in Windows 10 reports
e = 2.7182818284590452353602874713527
that is, 32 decimal digits.
Last weekend I found
e = 2.71828182845904523536028747135266250
that is, 36 decimal digits.
The math and code are below and could just as easily get e to, say, 500 decimal digits!
How'd that happen?
Last weekend I worked on some short but relatively careful notes to get a nephew of 9 started on calculus, and part of that was the Taylor series in just two pages with large fonts!
The code and the core of the Taylor series derivation are below.
In TeX, Taylor series is
f(x) = \sum_{i=0}^n {(x - x_0)^i \over i!} f^{[i]}(x_0) + R_n(x_0)
with R_n(x_0) as the error term.
To derive the Taylor series, really just find the error term
R_n(x_0)
and for that just differentiate the expansion with respect to x_0, where nearly all the terms cancel (they telescope), simplify, integrate from x_0 to x, and apply the mean value theorem. That's all there is to it!
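In a little more detail: differentiating with respect to x_0 telescopes the sum and leaves

R_n'(x_0) = - {(x - x_0)^n \over n!} f^{[n+1]}(x_0)

and since R_n vanishes at x_0 = x (the expansion there reduces to f(x)), integrating from x_0 to x gives

R_n(x_0) = \int_{x_0}^{x} {(x - t)^n \over n!} f^{[n+1]}(t) \, dt

after which the mean value theorem for integrals pulls out a factor of (x - x_0).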
The results are, for some s between x_0 and x:
R_n(x_0) = (x - x_0) {(x-s)^n \over n!} f^{[n+1]}(s)
As above, the final output of the code:
e = 2.71828182845904523536028747135266250
From R_n(x_0) the error is less than
3 x 10^(-40)
The numerical output of the code is curious: you get a little over 1 decimal digit of accuracy for each term of the series! So the output shows two big triangles, one for the values of n! and one for the number of correct digits in the estimate of e.
A key to why this code is so simple and works so well: Kexx can do arithmetic with 1000 decimal digits of precision!
"Look, Ma, here's the code -- dirt simple":
/* NATLOG: estimate e by summing the Taylor series e = 1 + 1/1! + 1/2! + ... */
macro_name = 'NATLOG'
out_file = macro_name || '.out'
'nomsg erase' out_file
Call msgg macro_name': Find natural logarithm base e'
numeric digits 1000                /* 1000 decimal digits of working precision */
n = 35                             /* number of series terms after the leading 1 */
sum = 1
factorial = 1
Do i = 1 To n
  factorial = i * factorial
  sum = sum + 1/factorial
  Call msgg Format(i, 5) Format(factorial, 50) Format(sum, 2, 35)
End
error = 3 / factorial              /* bound on the truncated tail, 3/n! */
Call msgg macro_name': The error is <='
Call msgg Format( error, 59, 50 )
Call Lineout out_file              /* close the output file */
Return

/* msgg: append one line to the output file */
msgg:
Procedure expose out_file
Call Lineout out_file, arg(1)
Return
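And for anyone without Kexx handy, here's a rough Python equivalent of the same computation using the decimal module (the 1000-digit working precision and the 3/n! error bound mirror the macro above; the output formatting is only approximate):

# Sum the Taylor series of exp(1): e = sum over i >= 0 of 1/i!,
# with an explicit error bound of 3/n! on the truncated tail.
from decimal import Decimal, getcontext

getcontext().prec = 1000            # like "numeric digits 1000"

n = 35
total = Decimal(1)
factorial = Decimal(1)
for i in range(1, n + 1):
    factorial *= i
    total += Decimal(1) / factorial
    print(f"{i:5d} {factorial:50.0f} {total:.35f}")

print("error <=", Decimal(3) / factorial)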