Towards trust in Emacs (eshelyaron.com)
accelbred 5 hours ago [-]
The one problem I have with the trusted-files mechanism is that I have no way to trust non-file-visiting buffers. Why is *scratch* untrusted!? Ideally *scratch* should always be trusted, without me having to configure anything. Though a setting to automatically trust non-file-visiting buffers would be nice. I ended up abandoning the scratch buffer entirely because of this.
eshelyaron 2 hours ago [-]
Right, the fact that the initial scratch buffer is untrusted is a bug AFAICT. I'm considering adding a workaround to this issue in trust-manager, although ideally it should (also) be solved upstream.
pkal 41 minutes ago [-]
Shouldn't something like this fix the problem, at least for scratch buffers:

(add-hook 'lisp-interaction-mode-hook (lambda () (setq-local trusted-content :all)))

eshelyaron 7 minutes ago [-]
Pretty sure that's unsafe, don't do that.

Only the scratch buffer is to be exempted, not every buffer that gets this mode.
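The safe variant of pkal's suggestion would scope the exemption to the *scratch* buffer itself rather than to every lisp-interaction-mode buffer. A minimal sketch, assuming the Emacs 30+ `trusted-content` variable, which accepts `:all` to mark a whole buffer as trusted:

```elisp
;; Trust only the initial *scratch* buffer, not every buffer
;; that happens to use lisp-interaction-mode.
;; Setting `trusted-content' buffer-locally to :all marks that
;; buffer's contents as trusted (Emacs 30+).
(when-let* ((scratch (get-buffer "*scratch*")))
  (with-current-buffer scratch
    (setq-local trusted-content :all)))
```

This only covers the buffer that exists at startup; a buffer created or renamed to *scratch* later would not inherit the exemption.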

TheChaplain 3 hours ago [-]
> ... the problem with security measures that cause too much friction is that users tend to disable them in order to get on with their work. To fulfill its security purposes, a good trust system needs to stay out of your way.

I wish more security engineers understood this clearly, but, alas...

quotemstr 4 hours ago [-]
The trust model of Emacs makes no sense. It's overly conservative, hurts the development experience, encourages blanket permission granting, and worst of all, sins against logic and lisp themselves.

Macro expansion is data transformation. Form in, form out. Most macros are pure functions of their inputs. Even the ones that aren't seldom have effects that would allow exploitation. That's because a well-written macro does not have side effects at expansion time, but instead generates code that, when itself evaluated, has the desired effect.

Yes, in general, for arbitrary values of "macro" and "form", using a macro to expand a form leads to arbitrary code execution. This much is true. But the risk only manifests when both the macro and its input form are untrusted.

The vast majority of macros are dumb pure functions and do not perform dangerous actions on untrusted input. It is safe to use these macros to expand untrusted forms. Doing so would make flymake, find-function, and other features work correctly in most cases. To blanket-prohibit expansion even by macros doing obviously safe transformations is to misunderstand the issue.

At a minimum, it must be possible to define a macro and mark it safe for expanding untrusted code. Yes, it's prudent to have a whitelist and not a blacklist. Right now, we don't even have a whitelist. All macros on any untrusted form are deemed unsafe. That's too conservative.
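Such a whitelist could be as simple as a symbol property that macro authors opt into and the expander consults before touching untrusted forms. No such mechanism exists in Emacs today; the property name below is purely hypothetical:

```elisp
;; Hypothetical sketch -- `safe-macro-expansion' is an invented
;; property, not an existing Emacs facility.
(defmacro my-swap (a b)
  "A dumb pure macro: form in, form out, no expansion-time effects."
  `(let ((tmp ,a)) (setq ,a ,b) (setq ,b tmp)))

;; The macro author asserts that expansion itself is side-effect free:
(put 'my-swap 'safe-macro-expansion t)

;; An expander serving untrusted buffers could then check the property
;; before expanding:
;;   (get 'my-swap 'safe-macro-expansion)
```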

Beyond that, it would be safe to run the macro-expander itself in an environment without access to mutating global operations. Since almost all macros are intrinsically safe to expand, we'd have far fewer situations in which people had subpar development experiences from overly conservative security mitigations.

In addition, after I've run eval-buffer on a buffer, Emacs should perform macro expansions in that buffer, at least until I revert it from disk. If I have evaluated a malicious buffer, I have already accepted its malice into my Emacs, and expanding macros for find-function can do no more harm.
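That eval-then-trust behavior can be approximated today with a piece of advice. A hedged sketch, assuming the Emacs 30+ `trusted-content` variable and that reverting the buffer resets buffer-local state:

```elisp
;; After a buffer has been evaluated wholesale, mark it trusted so
;; that subsequent macro expansions (e.g. for find-function) proceed.
;; The setq-local is undone on revert only if `trusted-content' is
;; not marked permanent-local -- worth verifying before relying on it.
(advice-add 'eval-buffer :after
            (lambda (&rest _)
              (setq-local trusted-content :all)))
```

Note the advice sets the variable in whatever buffer is current, which matches interactive eval-buffer use but not programmatic calls that pass a different buffer.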

shevy-java 2 hours ago [-]
"Up to version 30, it didn’t differentiate between trusted and untrusted files, and in effect treated all files as trusted."

Age verification aaaaaand Trusted Computing now! \o/

(Just kidding - but I have to point at the question of what trust is, exactly. I cannot accept the "trusted files" claim; I don't think anyone can ever trust anything unless there is some truly objective criterion that is unchangeable. But if something is unchangeable, can it be useful for anything? Yes, you can ensure that a calculator maps a given input to the correct output, or write a function to do so, but in real calculation this is not the only factor to be guaranteed, not even in quantum computing. What if you manage to influence the calculation process via light/laser interference or any other means?

I can't accept the term "trusted" here, because it implies one could and should trust something. That is a similar problem to the term AI - I never could accept that "AI" has anything to do with real intelligence on the given hardware; it is just a simulation of intelligence. Pattern matching and recognition only make it more likely to produce useful results, but that does not imply intelligence at all. It lacks true understanding - that is why it has to sniff for data, to improve the mapping of generated output. One can see this in many AI-centric videos on YouTube: the AI is often hallucinating and creating videos that are not physically possible, e.g. a leg suddenly appearing in motion, twisted in the opposite direction. That shows the AI does not understand what it is doing; any human could realise this is physically impossible. I see this even more in cheaper AI videos, e.g. Chuck Norris videos where Chuck kicks everyone yet the motions are totally wrong and detached from the "real" scene.)

like_any_other 6 hours ago [-]
It's getting so very old - all I want out of a process is code autocomplete, but I have to grant it read & write permission to my entire disk and network. When do we get good permissions and sandboxing and isolation? This can't go on.
nextos 5 hours ago [-]
I agree granting processes permission to read any file is unsustainable.

In Linux, sandboxing with Firejail or bwrap is quite easy to configure and allows fine-grained permissions.

Also, the new Landlock LSM and LSM-eBPF are quite promising.
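To make the bwrap point concrete, here is an invocation sketch: read-only system directories, one writable project tree, and no network. All paths and the tool name are illustrative, and the exact bind set depends on the distribution:

```shell
# Run a code-completion tool confined to one project, with no network
# access (paths and tool name illustrative).
bwrap \
  --ro-bind /usr /usr \
  --symlink usr/lib /lib \
  --symlink usr/lib64 /lib64 \
  --symlink usr/bin /bin \
  --proc /proc \
  --dev /dev \
  --tmpfs /tmp \
  --bind "$HOME/project" "$HOME/project" \
  --unshare-net \
  /usr/bin/completion-tool
```

The tool sees only what is explicitly bound in; `--unshare-net` drops it into an empty network namespace.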

boxedemp 5 hours ago [-]
I build my own. Maybe I need to externalize it...
phplovesong 4 hours ago [-]
I tried AI, and I never could/dared to actually push anything to prod. The code seems OK, but I always have a gut feeling something's off.

I guess the most valuable thing you lose is the "what" and "how". You can't learn these things from just reading code, because the mental model just is not there.

Also, I dislike code reviews; they feel "like a waste of time" because, sure, I can spot some things, but I never can give the real feedback, because the mental model is not there (I did not write this code).

Having said that, I still use AI for my own code review; AI can spot off-by-one errors, typos, or even a few edge cases I missed.

But I still write my own code. I want to own it, to have an intimate knowledge of it, and to really understand the whys and whats.

andsoitis 3 hours ago [-]
Do you not work with others in a code base?