This post presents an inconclusive attempt at a proof of concept: that fMRI data from human brains can help improve moral reasoning in large language models.
Offensive technologies will likely advance faster than defensive ones, which will lag behind. AI hacking capabilities might be the second most important problem after AI alignment.
This text proposes a modification to the third condition of deceptive alignment (Hubinger et al., 2019).
This is a summary and critique of two publications on AI safety: Pragmatic AI Safety and Risks from Learned Optimization. Both are introductory and aim to direct research.
This summer I took part in ML Safety Scholars, with online courses from MIT and the University of Michigan as well as a newly designed Introduction to ML Safety. The program, run by the Center for AI Safety and designed by Dan Hendrycks, introduces students to the fundamentals of deep learning and ML safety. The final project was yet another MNIST classifier, but this time with as many safety features as possible. The program lasted about 10 weeks, which I mixed with my relocation to Turkey, so it took me some effort to finish in time.
What I might have done differently to finish this program better is to deepen my knowledge of mathematics beforehand (probability theory, information theory, multivariable calculus). To understand things like entropy and various probability distributions, or to implement backpropagation, we need ready-to-use knowledge; it is much harder to pick it up in the middle of a task when time is running out. So it seems better to use spaced repetition rather than massed practice (see Make It Stick by Brown et al.).
This program is like a bootcamp for the ML safety field. It won’t teach you how to make state-of-the-art models, but it will introduce you to the latest concepts in ML safety. The first part, on ML, wasn’t new to me because I had finished an intro ML course before and had also done the FastAI course; that covers the first two weeks. But the DL for CV and ML Safety parts were completely new to me. The hardest part is perhaps the last one, as its concepts build on the previous material and target specific areas. The final project was fun to make because it reiterated most of the material we had learned, but with the aim of synthesis: connecting all these methods and techniques together.
The picture in the header shows the calibration of one of the models we trained.
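Calibration of the kind shown in the header picture is usually summarized by expected calibration error (ECE): bin the model's confidences, then compare each bin's average confidence with its accuracy. Here is a minimal sketch of that metric, not the course's actual code; the toy confidences and labels at the end are made up for illustration:

```python
# Expected Calibration Error (ECE): bin predictions by confidence,
# then average the |confidence - accuracy| gap, weighted by bin size.
def expected_calibration_error(confidences, correct, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        # A confidence of exactly 1.0 falls into the last bin.
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += len(b) / n * abs(avg_conf - accuracy)
    return ece

# A perfectly calibrated toy model: 80% confident, correct 80% of the time.
confs = [0.8] * 10
hits = [True] * 8 + [False] * 2
print(round(expected_calibration_error(confs, hits), 6))  # prints 0.0
```

A well-calibrated model has an ECE near zero; an overconfident one (say, 99% confidence with 80% accuracy) would score far higher.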
Here is a way to organize your tasks, your personal information, your reference library, your values, your life. It aims to be a simple yet powerful system that one can master and extend without limit. Yes, that puts high demands on the user's knowledge and skill. It is primarily for power users, even developers. In essence, the system is based on plain text, a version control system, a powerful text editor, and a shell. Even when mastered, these tools don't immediately give you a clean system: you need an understanding of how to prioritize, how to manage, and how to execute, because the system starts as a blank sheet, open to any kind of modification.
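Because everything lives in plain text, the system is trivially scriptable. As one illustration (an assumption of mine, not a prescribed format), suppose tasks are kept todo.txt-style, one per line with an optional `(A)` priority prefix; a few lines of code then give you a prioritized view:

```python
import re

# Parse a todo.txt-style line: an optional "(A)" priority, then the task text.
TASK_RE = re.compile(r"^(?:\((?P<prio>[A-Z])\)\s+)?(?P<text>.+)$")

def parse_task(line):
    m = TASK_RE.match(line.strip())
    return {"priority": m.group("prio"), "text": m.group("text")}

def top_tasks(lines):
    """Prioritized tasks first (A before B), unprioritized ones last."""
    tasks = [parse_task(line) for line in lines if line.strip()]
    return sorted(tasks, key=lambda t: t["priority"] or "ZZ")

todo = [
    "(B) review pull request",
    "buy groceries",
    "(A) write weekly plan",
]
for t in top_tasks(todo):
    print(t["priority"], t["text"])
```

The same lines can be grepped, diffed, and versioned in git, which is the whole point of staying in plain text.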
I published a new sample Web API project that implements the back end for a biking application. It implements OAuth 2 security, with authentication handled by a separate server (IdentityServer4), and has many other good features found in today's systems. The project also shows how to automate development from the CLI alone, using PowerShell and Python. It was built in Vim and a command-line environment, which became possible because Microsoft did great work porting its tools to other platforms (OmniSharp-Vim, dotnet CLI).
Please see the sample project on GitHub: https://github.com/artkpv/Sample-ASP-Core-API-with-OAuth
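The client side of an OAuth 2 client-credentials flow (the kind of token request an IdentityServer4 instance serves) is just a form-encoded POST to the token endpoint. Here is a sketch that only builds the request without sending it; the endpoint URL, client id, secret, and scope below are placeholders, not values from the sample project:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Build (but do not send) an OAuth 2 client-credentials token request.
def build_token_request(token_url, client_id, client_secret, scope):
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }).encode()
    return Request(
        token_url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

req = build_token_request(
    "https://localhost:5000/connect/token",  # hypothetical token endpoint
    "sample_client", "sample_secret", "bike_api")
print(req.get_method())  # prints POST
```

Sending it with `urllib.request.urlopen(req)` would return a JSON body containing the `access_token` to attach as a Bearer header on API calls.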
Let's look at how concurrent and parallel programming works in .NET, using the dining philosophers problem as an example. The plan: from thread/process synchronization to the actor model (in the following parts). The article may be useful as a first introduction or as a refresher.
Why learn this at all? Transistors are approaching their minimum size, and Moore's law is running into the speed-of-light limit, so growth now comes from quantity: we can add more transistors. Meanwhile the amount of data keeps growing, and users expect systems to respond immediately. In this situation "ordinary" programming, with a single thread of execution, is no longer effective. We have to solve the problem of simultaneous, or concurrent, execution. And this problem exists at several levels: threads, processes, and machines on a network (distributed systems). .NET has high-quality, time-tested technologies for solving such problems quickly and effectively.
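The article's examples are in .NET, but the core deadlock-avoidance idea in the dining philosophers problem is language-agnostic. Here is a minimal sketch in Python: each philosopher acquires the lower-numbered fork first, so the circular wait that causes deadlock can never form:

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
eaten = [0] * N

def philosopher(i, meals=100):
    left, right = i, (i + 1) % N
    # Global lock ordering: always take the lower-numbered fork first.
    # This breaks the circular-wait condition, so deadlock is impossible.
    first, second = min(left, right), max(left, right)
    for _ in range(meals):
        with forks[first]:
            with forks[second]:
                eaten[i] += 1  # "eat"

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(eaten)  # prints [100, 100, 100, 100, 100]
```

Without the ordering (if every philosopher grabbed the left fork first), all five could each hold one fork and wait forever for the other; the `min`/`max` trick is the classic resource-hierarchy solution.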
Consider one book that tries to compile all the best practices of past years for software construction. With about 450 of the most important publications of the time in its bibliography (the second edition was published in 2004), this book endeavours to bring the development process to a totally new level. Essentially, it interprets the most important of those publications and gives guidelines for building software. It takes recommendations from gurus in the field and from scientists and presents practical rules for effectively building quality software, narrowing the gap between production and research. Hence the title of the book, which implies a complete set of laws.
The book is divided into seven parts. After giving some basic terms and ideas in Part I, the author goes from the very bottom of the construction process, the code itself (Parts II, III, IV), up to higher-level processes. Part V is about working with existing code. Controlling system development is covered in Part VI. And the last part, Part VII, is about what is required of the creator himself. For quick use, the book includes checklists for all stages of construction, along with many figures, tables, and key terms, so it can also serve as a quick reference.
Below are my notes from the book (unfinished).
This great book, written by D. Esposito and A. Saltarello, collects recipes for building solutions for enterprise applications. It is not a usual cookbook, though, but a set of guiding principles for making a 'dish'. It is based on the authors' experience and on common practices from other books. The second edition, published in January 2015, is a good summation of the best practices of the time in the .NET world, with a brief history of them.
The authors begin by answering questions such as: What is application architecture? Who are software architects? How does modern app development differ from the past? We can divide the book into two parts (though the authors divide it into four). The first part answers questions like: What is needed for successful work? What is required from a team? What is code quality and how do you achieve it? The second part gives practical recommendations for developers: What types of architecture do the authors see? What are their components? What are the main advantages and disadvantages?
To my mind, practical experience in software development is required to understand the book; it is not for beginners.
Below are notes and mind maps of the book.