How to install numpy in Visual Studio Code

VSCode fails to import numpy with ImportError: DLL load failed: The specified module could not be found. #14770

I'm using VS Code, but no conda. I read some existing issues that looked the same as mine, but their suggestions didn't work for me.

I have tried reinstalling using both pip install numpy and the package from https://www.lfd.uci.edu/~gohlke/pythonlibs/, but both give me the same error.

numpy: (1.17.3+mkl)
scipy: (1.3.1)

The output from my console:

import numpy
Traceback (most recent call last):
  File "C:\Users\Feng\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\numpy\core\__init__.py", line 17, in <module>
    from . import multiarray
  File "C:\Users\Feng\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\numpy\core\multiarray.py", line 14, in <module>
    from . import overrides
  File "C:\Users\Feng\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\numpy\core\overrides.py", line 7, in <module>
    from numpy.core._multiarray_umath import (
ImportError: DLL load failed: The specified module could not be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Feng\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\numpy\__init__.py", line 142, in <module>
    from . import core
  File "C:\Users\Feng\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\numpy\core\__init__.py", line 47, in <module>
    raise ImportError(msg)
ImportError:

IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!

Importing the numpy c-extensions failed.

Try uninstalling and reinstalling numpy.

If you have already done that, then:

  1. Check that you expected to use Python3.7 from "C:\Users\Feng\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe",
    and that you have no directories in your PATH or PYTHONPATH that can
    interfere with the Python and numpy version "1.17.3" you’re trying to use.
  2. If (1) looks fine, you can open a new issue at
    https://github.com/numpy/numpy/issues. Please include details on:
    • how you installed Python
    • how you installed numpy
    • your operating system
    • whether or not you have multiple versions of Python installed
    • if you built from source, your compiler versions and ideally a build log

If you’re working with a numpy git repository, try git clean -xdf
(removes all files not under version control) and rebuild numpy.

Note: this error has many possible causes, so please don’t comment on
an existing issue about this — open a new one instead.

Original error was: DLL load failed: The specified module could not be found.

Installing NumPy

The only prerequisite for installing NumPy is Python itself. If you don’t have Python yet and want the simplest way to get started, we recommend you use the Anaconda Distribution — it includes Python, NumPy, and many other commonly used packages for scientific computing and data science.

NumPy can be installed with conda, with pip, with a package manager on macOS and Linux, or from source. For more detailed instructions, consult our Python and NumPy installation guide below.

CONDA

If you use conda, you can install NumPy from the defaults or conda-forge channels by running conda install numpy.

PIP

If you use pip, you can install NumPy by running pip install numpy.

Also when using pip, it’s good practice to use a virtual environment — see Reproducible Installs below for why, and this guide for details on using virtual environments.

Python and NumPy installation guide

Installing and managing packages in Python is complicated; there are a number of alternative solutions for most tasks. This guide tries to give the reader a sense of the best (or most popular) solutions and to give clear recommendations. It focuses on users of Python, NumPy, and the PyData (or numerical computing) stack on common operating systems and hardware.

Recommendations

We’ll start with recommendations based on the user’s experience level and operating system of interest. If you’re in between “beginning” and “advanced”, please go with “beginning” if you want to keep things simple, and with “advanced” if you want to work according to best practices that will go a long way in the future.

Beginning users

On all of Windows, macOS, and Linux:

  • Install Anaconda (it installs all packages you need and all other tools mentioned below).
  • For writing and executing code, use notebooks in JupyterLab for exploratory and interactive computing, and Spyder or Visual Studio Code for writing scripts and packages.
  • Use Anaconda Navigator to manage your packages and start JupyterLab, Spyder, or Visual Studio Code.

Advanced users

Conda
  • Install Miniforge.
  • Keep the base conda environment minimal, and use one or more conda environments to install the packages you need for the task or project you’re working on.
Alternative if you prefer pip/PyPI

For users who know (from personal preference, or from reading about the main differences between conda and pip below) that they prefer a pip/PyPI-based solution, we recommend:

  • Install Python from python.org, Homebrew, or your Linux package manager.
  • Use Poetry as the most well-maintained tool that provides a dependency resolver and environment management capabilities in a similar fashion as conda does.

Python package management

Managing packages is a challenging problem, and, as a result, there are lots of tools. For web and general purpose Python development there’s a whole host of tools complementary to pip. For high-performance computing (HPC), Spack is worth considering. For most NumPy users, though, conda and pip are the two most popular tools.

Pip & conda

The two main tools that install Python packages are pip and conda. Their functionality partially overlaps (e.g. both can install numpy); however, they can also work together. We’ll discuss the major differences between pip and conda here; this is important to understand if you want to manage packages effectively.

The first difference is that conda is cross-language and it can install Python, while pip is installed for a particular Python on your system and installs other packages to that same Python install only. This also means conda can install non-Python libraries and tools you may need (e.g. compilers, CUDA, HDF5), while pip can’t.

The second difference is that pip installs from the Python Packaging Index (PyPI), while conda installs from its own channels (typically “defaults” or “conda-forge”). PyPI is the largest collection of packages by far; however, all popular packages are available for conda as well.

The third difference is that conda is an integrated solution for managing packages, dependencies and environments, while with pip you may need another tool (there are many!) for dealing with environments or complex dependencies.

Reproducible installs

As libraries get updated, results from running your code can change, or your code can break completely. It’s important to be able to reconstruct the set of packages and versions you’re using. Best practice is to:

  1. use a different environment per project you’re working on,
  2. record package names and versions using your package installer; each has its own metadata format for this:
    • Conda: conda environments and environment.yml
    • Pip: virtual environments and requirements.txt
    • Poetry: virtual environments and pyproject.toml

NumPy packages & accelerated linear algebra libraries

NumPy doesn’t depend on any other Python packages; however, it does depend on an accelerated linear algebra library, typically Intel MKL or OpenBLAS. Users don’t have to worry about installing those (they’re automatically included in all NumPy install methods). Power users may still want to know the details, because the BLAS library in use can affect performance, behavior, and size on disk:

The NumPy wheels on PyPI, which is what pip installs, are built with OpenBLAS. The OpenBLAS libraries are included in the wheel. This makes the wheel larger, and if a user installs (for example) SciPy as well, they will now have two copies of OpenBLAS on disk.

In the conda defaults channel, NumPy is built against Intel MKL. MKL is a separate package that will be installed in the users’ environment when they install NumPy.

In the conda-forge channel, NumPy is built against a dummy “BLAS” package. When a user installs NumPy from conda-forge, that BLAS package then gets installed together with the actual library — this defaults to OpenBLAS, but it can also be MKL (from the defaults channel), or even BLIS or reference BLAS.

The MKL package is a lot larger than OpenBLAS: it’s about 700 MB on disk, while OpenBLAS is about 30 MB.

MKL is typically a little faster and more robust than OpenBLAS.

Besides install sizes, performance and robustness, there are two more things to consider:

  • Intel MKL is not open source. For normal use this is not a problem, but if a user needs to redistribute an application built with NumPy, this could be an issue.
  • Both MKL and OpenBLAS will use multi-threading for function calls like np.dot , with the number of threads being determined by both a build-time option and an environment variable. Often all CPU cores will be used. This is sometimes unexpected for users; NumPy itself doesn’t auto-parallelize any function calls. It typically yields better performance, but can also be harmful — for example when using another level of parallelization with Dask, scikit-learn or multiprocessing.
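
For example, the BLAS thread count can be capped through environment variables that OpenBLAS, MKL and OpenMP respect, provided they are set before NumPy is imported. This is only a minimal sketch; the variable names and the value 4 are common conventions, not something prescribed by NumPy itself, and the threadpoolctl package offers similar control at runtime:

    import os

    # These must be set before numpy (and therefore the BLAS library) is loaded
    os.environ["OMP_NUM_THREADS"] = "4"        # OpenMP-based builds
    os.environ["OPENBLAS_NUM_THREADS"] = "4"   # OpenBLAS
    os.environ["MKL_NUM_THREADS"] = "4"        # Intel MKL

    import numpy as np

    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)
    c = a @ b  # np.dot / @ will now use at most 4 BLAS threads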

Troubleshooting

If your installation fails with the ImportError message shown earlier on this page, see Troubleshooting ImportError in the NumPy documentation.

Data Science in VS Code tutorial

This tutorial demonstrates using Visual Studio Code and the Microsoft Python extension with common data science libraries to explore a basic data science scenario. Specifically, using passenger data from the Titanic, you will learn how to set up a data science environment, import and clean data, create a machine learning model for predicting survival on the Titanic, and evaluate the accuracy of the generated model.

Prerequisites

The following installations are required for the completion of this tutorial. Make sure to install them if you haven’t already.

The Python extension for VS Code and Jupyter extension for VS Code from the Visual Studio Marketplace. By default, the Python extension installs the Jupyter extension for you. For more details on installing extensions, see Extension Marketplace. Both extensions are published by Microsoft. You will also need Miniconda (or another conda installation) so that you can create the environment used in the next section; see the note below if you already have the full Anaconda distribution or would rather use pip.

Note: If you already have the full Anaconda distribution installed, you don’t need to install Miniconda. Alternatively, if you’d prefer not to use Anaconda or Miniconda, you can create a Python virtual environment and install the packages needed for the tutorial using pip. If you go this route, you will need to install the following packages: pandas, jupyter, seaborn, scikit-learn, keras, and tensorflow.

Set up a data science environment

Visual Studio Code and the Python extension provide a great editor for data science scenarios. With native support for Jupyter notebooks combined with Anaconda, it’s easy to get started. In this section, you will create a workspace for the tutorial, create an Anaconda environment with the data science modules needed for the tutorial, and create a Jupyter notebook that you’ll use for creating a machine learning model.

Begin by creating an Anaconda environment for the data science tutorial. Open an Anaconda command prompt and run conda create -n myenv python=3.9 pandas jupyter seaborn scikit-learn keras tensorflow to create an environment named myenv. For additional information about creating and managing Anaconda environments, see the Anaconda documentation.

Next, create a folder in a convenient location to serve as your VS Code workspace for the tutorial, and name it hello_ds.

Open the project folder in VS Code by running VS Code and using the File > Open Folder command. You can safely trust opening the folder, since you created it.

Once VS Code launches, create the Jupyter notebook that will be used for the tutorial. Open the Command Palette (Ctrl+Shift+P on Windows and Linux, ⇧⌘P on macOS) and select Jupyter: Create New Jupyter Notebook.

Note: Alternatively, from the VS Code File Explorer, you can use the New File icon to create a Notebook file named hello.ipynb .

Save the file as hello.ipynb using File > Save As.

After your file is created, you should see the open Jupyter notebook in the notebook editor. For additional information about native Jupyter notebook support, you can read the Jupyter Notebooks topic.

Viewing a new Jupyter Notebook

Now select Select Kernel at the top right of the notebook.

Choose the Python environment you created above in which to run your kernel.

Prepare the data

This tutorial uses the Titanic dataset available on OpenML.org, which is obtained from Vanderbilt University’s Department of Biostatistics at https://hbiostat.org/data. The Titanic data provides information about the survival of passengers on the Titanic and characteristics about the passengers such as age and ticket class. Using this data, the tutorial will establish a model for predicting whether a given passenger would have survived the sinking of the Titanic. This section shows how to load and manipulate data in your Jupyter notebook.

To begin, download the Titanic data from hbiostat.org as a CSV file (download links in the upper right) named titanic3.csv and save it to the hello_ds folder that you created in the previous section.

If you haven’t already opened the file in VS Code, open the hello_ds folder and the Jupyter notebook (hello.ipynb) by going to File > Open Folder.

Within your Jupyter notebook, begin by importing the pandas and numpy libraries, two common libraries used for manipulating data, and loading the Titanic data into a pandas DataFrame. To do so, copy the code below into the first cell of the notebook. For more guidance about working with Jupyter notebooks in VS Code, see the Working with Jupyter Notebooks documentation.
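
A minimal version of that first cell could look like the sketch below. It assumes the CSV was saved as titanic3.csv in the hello_ds folder and that the DataFrame is named data; the later sketches on this page reuse that name.

    import pandas as pd
    import numpy as np

    # Load the Titanic passenger data into a DataFrame
    data = pd.read_csv('titanic3.csv')
    data.head()  # show the first few rows to confirm the load worked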

Now, run the cell using the Run cell icon or the Shift+Enter shortcut.

Running a Jupyter notebook cell

After the cell finishes running, you can view the data that was loaded using the Variables Explorer and Data Viewer. First select the Variables icon in the notebook’s upper toolbar.

Select Variables icon

A JUPYTER: VARIABLES pane will open at the bottom of VS Code. It contains a list of the variables defined so far in your running kernel.

To view the data in the Pandas DataFrame previously loaded, select the Data Viewer icon to the left of the data variable.

Use the Data Viewer to view, sort, and filter the rows of data. After reviewing the data, it can then be helpful to graph some aspects of it to help visualize the relationships between the different variables.

Before the data can be graphed, you need to make sure that there aren’t any issues with it. If you look at the Titanic csv file, one thing you’ll notice is that a question mark ("?") was used to identify cells where data wasn’t available.

While Pandas can read this value into a DataFrame, the result for a column like age is that its data type will be set to object instead of a numeric data type, which is problematic for graphing.

This problem can be corrected by replacing the question mark with a missing value that pandas is able to understand. Add the following code to the next cell in your notebook to replace the question marks in the age and fare columns with the numpy NaN value. Notice that we also need to update the column’s data type after replacing the values.
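
A sketch of such a cell, assuming the DataFrame is named data as above:

    # Replace the '?' placeholders with NaN, then cast the affected columns to float
    data.replace('?', np.nan, inplace=True)
    data = data.astype({'age': np.float64, 'fare': np.float64})
    data.dtypes  # confirm that age and fare are now numeric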

Tip: To add a new cell, you can use the insert cell icon that’s in the bottom left corner of an existing cell. Alternatively, you can press Esc to enter command mode, followed by the B key.

Note: If you ever need to see the data type that has been used for a column, you can use the DataFrame dtypes attribute.

Now that the data is in good shape, you can use seaborn and matplotlib to view how certain columns of the dataset relate to survivability. Add the following code to the next cell in your notebook and run it to see the generated plots.
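
The exact plots are up to you; a sketch along these lines, assuming the column names from titanic3.csv, relates a few individual columns to survival:

    import seaborn as sns
    import matplotlib.pyplot as plt

    # One row of plots relating age, relatives, class and fare to survival
    fig, axs = plt.subplots(ncols=5, figsize=(30, 5))
    sns.violinplot(x='survived', y='age', hue='sex', data=data, ax=axs[0])
    sns.pointplot(x='sibsp', y='survived', hue='sex', data=data, ax=axs[1])
    sns.pointplot(x='parch', y='survived', hue='sex', data=data, ax=axs[2])
    sns.pointplot(x='pclass', y='survived', hue='sex', data=data, ax=axs[3])
    sns.violinplot(x='survived', y='fare', hue='sex', data=data, ax=axs[4])
    plt.show()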

Graphing the titanic data

To better view details on the graphs, you can open them in the plot viewer by hovering over the upper right corner of the graph and clicking the button that appears.

These graphs are helpful in seeing some of the relationships between survival and the input variables of the data, but it’s also possible to use pandas to calculate correlations. To do so, all the variables used need to be numeric for the correlation calculation and currently gender is stored as a string. To convert those string values to integers, add and run the following code.
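
One way to do the conversion, assuming the gender column is named sex as in titanic3.csv:

    # Map the string labels to integers so the column can be used in a correlation
    data['sex'] = data['sex'].map({'male': 1, 'female': 0})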

Now, you can analyze the correlation between all the input variables to identify the features that would be the best inputs to a machine learning model. The closer a value is to 1, the higher the correlation between the value and the result. Use the following code to correlate the relationship between all variables and survival.
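
A sketch of such a cell (the numeric_only argument requires pandas 1.5 or newer; on older versions a plain data.corr() skips non-numeric columns automatically):

    # Absolute correlation of every numeric column with survival
    data.corr(numeric_only=True).abs()[['survived']]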

Determining the correlation between input variables and survival

Looking at the correlation results, you’ll notice that some variables like gender have a fairly high correlation to survival, while others like relatives (sibsp = siblings or spouse, parch = parents or children) seem to have little correlation.

Let’s hypothesize that sibsp and parch are related in how they affect survivability, and group them into a new column called "relatives" to see whether the combination of them has a higher correlation to survivability. To do this, you will check whether, for a given passenger, the sum of sibsp and parch is greater than 0 and, if so, say that the passenger had a relative on board.

Use the following code to create a new variable and column in the dataset called relatives and check the correlation again.
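
For example (again assuming the DataFrame is named data):

    # relatives is 1 if the passenger had any siblings/spouses or parents/children aboard
    data['relatives'] = data.apply(lambda row: int((row['sibsp'] + row['parch']) > 0), axis=1)
    data.corr(numeric_only=True).abs()[['survived']]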

You’ll notice that, in fact, when looked at from the standpoint of whether a person had relatives (rather than how many), there is a higher correlation with survival. With this information in hand, you can now drop the low-value sibsp and parch columns from the dataset, as well as any rows that have NaN values, to end up with a dataset that can be used for training a model, as shown below.
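
A sketch of that clean-up step, keeping only the columns used later for modelling:

    # Keep the modelling columns and drop any rows that still contain NaN values
    data = data[['sex', 'pclass', 'age', 'relatives', 'fare', 'survived']].dropna()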

Note: Although age had a low direct correlation, it was kept because it seems reasonable that it might still have correlation in conjunction with other inputs.

Train and evaluate a model

With the dataset ready, you can now begin creating a model. For this section, you’ll use the scikit-learn library (as it offers some useful helper functions) to do pre-processing of the dataset, train a classification model to determine survivability on the Titanic, and then use that model with test data to determine its accuracy.

A common first step to training a model is to divide up the dataset into training and validation data. This allows you to use a portion of the data to train the model and a portion of the data to test the model. If you used all your data to train the model, you wouldn’t have a way to estimate how well it would actually perform against data the model hasn’t yet seen. A benefit of the scikit-learn library is that it provides a method specifically for splitting a dataset into training and test data.

Add and run a cell with the following code to the notebook to split up the data.
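
A sketch of the split, holding back 20% of the rows as test data:

    from sklearn.model_selection import train_test_split

    x_train, x_test, y_train, y_test = train_test_split(
        data[['sex', 'pclass', 'age', 'relatives', 'fare']],  # input features
        data.survived,                                        # target labels
        test_size=0.2,
        random_state=0,
    )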

Next, you’ll normalize the inputs such that all features are treated equally. For example, within the dataset the values for age range from 0 to 100, while gender is only a 1 or 0. By normalizing all the variables, you can ensure that the ranges of values are all the same. Use the following code in a new code cell to scale the input values.
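
For example, using scikit-learn’s StandardScaler, fitted on the training inputs only and then applied to both sets:

    from sklearn.preprocessing import StandardScaler

    sc = StandardScaler()
    X_train = sc.fit_transform(x_train)
    X_test = sc.transform(x_test)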

There are many different machine learning algorithms that you could choose from to model the data. The scikit-learn library also provides support for many of them and a chart to help select the one that’s right for your scenario. For now, use the Naïve Bayes algorithm, a common algorithm for classification problems. Add a cell with the following code to create and train the algorithm.
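
A minimal sketch using scikit-learn’s Gaussian Naive Bayes implementation:

    from sklearn.naive_bayes import GaussianNB

    # Train the classifier on the scaled training data
    model = GaussianNB()
    model.fit(X_train, y_train)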

With a trained model, you can now try it against the test data set that was held back from training. Add and run the following code to predict the outcome of the test data and calculate the accuracy of the model.
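
For example, using scikit-learn’s accuracy_score:

    from sklearn import metrics

    # Predict on the held-back test set and print the share of correct predictions
    predict_test = model.predict(X_test)
    print(metrics.accuracy_score(y_test, predict_test))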

Running the trained model against test data

Looking at the result of the test data, you’ll see that the trained algorithm had a 75% success rate at estimating survival.

(Optional) Use a neural network

A neural network is a model that uses weights and activation functions, modeling aspects of human neurons, to determine an outcome based on provided inputs. Unlike the machine learning algorithm you looked at previously, neural networks are a form of deep learning wherein you don’t need to know an ideal algorithm for your problem set ahead of time. It can be used for many different scenarios and classification is one of them. For this section, you’ll use the Keras library with TensorFlow to construct the neural network, and explore how it handles the Titanic dataset.

The first step is to import the required libraries and to create the model. In this case, you’ll use a Sequential neural network, which is a layered neural network wherein there are multiple layers that feed into each other in sequence.
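
A sketch of that first step (depending on your installation you may need to import from tensorflow.keras instead of keras):

    from keras.models import Sequential
    from keras.layers import Dense

    # An empty Sequential model; the layers are added in the next step
    model = Sequential()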

After defining the model, the next step is to add the layers of the neural network. For now, let’s keep things simple and just use three layers. Add the following code to create the layers of the neural network.
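
A sketch of the three layers described in the list that follows (on recent Keras versions you may need to declare the input with keras.Input(shape=(5,)) instead of passing input_dim):

    # Five inputs (sex, pclass, age, relatives, fare), one hidden layer, one sigmoid output
    model.add(Dense(5, activation='relu', input_dim=5))
    model.add(Dense(5, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))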

  • The first layer will be set to have a dimension of 5, since you have five inputs: sex, pclass, age, relatives, and fare.
  • The last layer must output 1, since you want a 1-dimensional output indicating whether a passenger would survive.
  • The middle layer was kept at 5 for simplicity, although that value could have been different.

The rectified linear unit (relu) activation function is used as a good general activation function for the first two layers, while the sigmoid activation function is required for the final layer because the output (whether a passenger survives or not) needs to be scaled to the range 0-1, the probability of a passenger surviving.

You can also look at the summary of the model you built with this line of code:
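
Assuming the Sequential model from the previous cells is still named model:

    model.summary()  # prints each layer with its output shape and parameter count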

Once the model is created, it needs to be compiled. As part of this, you need to define what type of optimizer will be used, how loss will be calculated, and which metric should be optimized for. Add the following code to build and train the model. You’ll notice that after training, the accuracy is around 61%.
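
A sketch of the compile-and-train cell (the batch size and epoch count here are illustrative choices, not values prescribed by the tutorial):

    # Adam optimizer, binary cross-entropy loss, accuracy as the reported metric
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(X_train, y_train, batch_size=32, epochs=50)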

Note: This step may take anywhere from a few seconds to a few minutes to run depending on your machine.

Build and train the neural network

Now that the model is built and trained, we can see how it works against the test data.
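
For example, by thresholding the predicted probabilities at 0.5 and comparing the resulting labels with the true ones:

    from sklearn import metrics

    # model.predict returns probabilities; threshold them to get 0/1 class labels
    y_pred = (model.predict(X_test) > 0.5).astype(int).flatten()
    print(metrics.accuracy_score(y_test, y_pred))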

Similar to the training, you’ll notice that you now have 61% accuracy in predicting survival of passengers. In this case, the result is not better than the 75% accuracy from the Naive Bayes Classifier tried previously.

Next steps

Now that you’re familiar with the basics of performing machine learning within Visual Studio Code, here are some other Microsoft resources and tutorials to check out.
