[My tentative answer, without firsthand experience, don't take it as definitive truth]
It is at least partly intentional: it is technically possible to run pip from within Python (though it is generally a bad idea). I would also put much of it down to history.
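For what it's worth, pip's own documentation recommends that if you must drive pip from a running program, you do it through a subprocess rather than by importing pip, since pip's internals are not a stable API and an in-process install can leave already-imported modules stale. A minimal sketch (the helper names are mine, and the package name is just an example):

```python
import subprocess
import sys

def pip_install_cmd(package):
    # Run pip as a module of the *current* interpreter (sys.executable),
    # so the install targets the environment this script runs in,
    # not whatever 'pip' happens to be first on PATH.
    return [sys.executable, "-m", "pip", "install", package]

def pip_install(package):
    # Spawning a fresh process is the supported route; calling pip's
    # internal functions from within a running interpreter is not.
    subprocess.check_call(pip_install_cmd(package))
```

Even with this workaround, the install only takes effect cleanly for modules imported after it runs, which is one reason mixing installation and analysis in one script stays awkward.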
Installing a package is not trivial: it typically means adding files in various places on the computer, sometimes compiling code, and in many languages it requires administrator permissions. In the 1990s (when R and Python were developed), most popular programming languages were compiled, so it made little sense to try to install libraries from within the language.
With that in mind, Python was first developed by a programmer, for programmers, so it makes sense that it followed the practices common in programming. R was developed by statisticians, for statisticians, so it went the extra mile to be user-friendly to non-programmers.
From a programmer's point of view, it makes sense to separate administrative tasks such as installing external libraries (which modify the state of the computer) from purely programming tasks. In R too, it is considered bad practice to include install.packages() inside an analysis script, so that possibility in R is really an additional convenience.
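The idiomatic Python equivalent of that separation is for an analysis script to only import its dependencies, failing with an actionable message if one is missing, while installation happens beforehand from the shell. A small sketch of that pattern (the require() helper is hypothetical, purely for illustration):

```python
import importlib

def require(module_name):
    # Import a dependency, or exit with instructions, instead of trying
    # to install it from inside the script (installation is treated as
    # an administrative step done beforehand, e.g. with pip in a shell).
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise SystemExit(
            f"Missing dependency {module_name!r}; install it from the shell, "
            f"e.g. 'pip install {module_name}'"
        ) from exc
```

The point is not the helper itself but the division of labor: the script declares what it needs, and the environment is prepared separately.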
Finally, Internet connectivity really took off in the early 1990s, making the systematic download of packages more practical (e.g. dpkg and rpm for Linux distributions started in 1993-1994, CTAN in 1992 and CPAN in 1993). Python development started in late 1989, earlier than R (whose CRAN repository launched in 1997, although R was based on S, which is much older). So that might also have changed how much thought the authors put into automated download and installation when first designing each language.