Sentiment analysis - LSTM and word2vec models on tensorflow

I am working on a project involving sentiment analysis on sentences. I got help from this tutorial: this model. I am using a CSV file containing tweet sentences labeled as positive or negative. I have a few questions: is there a difference if I use a word2vec algorithm like the skip-gram model and then feed the resulting embedding layer to this network? Or is it the same as initializing a random matrix and letting the network learn the word vectors by itself during training? Can I...Read more
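The practical difference between the two options is only the starting point of the embedding matrix: pre-trained vectors seed rows with meaningful values, while random initialization makes the network learn every vector from scratch. A minimal NumPy sketch of the seeding step (the vocabulary and word2vec values below are made up for illustration):

```python
import numpy as np

# Toy pre-trained word2vec vectors (made-up values for illustration).
pretrained = {"good": np.array([0.9, 0.1]), "bad": np.array([0.1, 0.9])}
vocab = ["good", "bad", "meh"]  # "meh" is out-of-vocabulary
dim = 2

rng = np.random.default_rng(0)

# Option 1: random initialization -- the network must learn every vector.
random_embeddings = rng.normal(scale=0.1, size=(len(vocab), dim))

# Option 2: seed rows from word2vec where available, random elsewhere.
seeded_embeddings = rng.normal(scale=0.1, size=(len(vocab), dim))
for i, word in enumerate(vocab):
    if word in pretrained:
        seeded_embeddings[i] = pretrained[word]

print(seeded_embeddings[0])  # row for "good" starts at its word2vec vector
```

In TF 1.x such a matrix would then be used as the initial value of the embedding variable (optionally with trainable=False to keep it fixed); with small labeled datasets, pre-trained vectors usually converge faster than random ones.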

Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2

I am new to TensorFlow. I have recently installed it (Windows CPU version) and received the following message:

Successfully installed tensorflow-1.4.0 tensorflow-tensorboard-0.4.0rc2

Then when I tried to run

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
'Hello, TensorFlow!'
a = tf.constant(10)
b = tf.constant(32)
print(sess.run(a + b))
42
sess.close()

(which I found through ...) I received the following message:

2017-11-02 01:56:21.698935: I C:\tf_jenkins\home\workspace\...Read more
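That message is informational, not an error: the prebuilt binary simply was not compiled with AVX/AVX2 support. A common workaround is to silence TensorFlow's C++ log output via the TF_CPP_MIN_LOG_LEVEL environment variable, which must be set before TensorFlow is imported; a sketch:

```python
import os

# "2" hides INFO and WARNING messages from TensorFlow's C++ backend.
# This assignment must run before `import tensorflow` is executed.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

# import tensorflow as tf  # import afterwards; the AVX notice is suppressed

print(os.environ["TF_CPP_MIN_LOG_LEVEL"])
```

To actually use the AVX/AVX2 instructions (rather than just hiding the notice), TensorFlow would have to be built from source with the matching compiler flags for your CPU.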

tensorflow - The code that works for LinearRegressor returns AttributeError: 'Tensor' object has no attribute 'get' for DynamicRnnEstimator

In the beginning, I need to say that I am using TF v1.1. The code:

import random
import tensorflow as tf

xData = []
yData = []
for _ in range(10000):
    x = random.random()
    xData.append(x)
    y = 2 * x
    yData.append(y)
xc = tf.contrib.layers.real_valued_column("")
estimator = tf.contrib.learn.DynamicRnnEstimator(problem_type = constants.ProblemType.LINEAR_REGRESSION,
    prediction_type = PredictionType.SINGLE_VALUE,
    sequence_feature_columns = [xc],
    ...Read more

How to reuse slim.arg_scope in TensorFlow?

I'm trying to load the inception_resnet_v2_2016_08_30.ckpt file and do testing. The code works well with a single image (entering the oneFile() function only once). If I call the oneFile() function twice, the following error occurs:

ValueError: Variable InceptionResnetV2/Conv2d_1a_3x3/weights already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

I found a related solution under Sharing Variables: if tf.variable_scope meets the same problem, one can call scope.reuse_variables() to resolve it. But I can't find the ...
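The error comes from tf.get_variable's create-or-reuse rule: it refuses to create a variable that already exists unless the enclosing scope has reuse enabled. A minimal pure-Python model of that behavior (not TensorFlow code, just the rule it enforces):

```python
_store = {}

def get_variable(name, reuse=False):
    """Mimics tf.get_variable's create-or-reuse rule."""
    if name in _store:
        if not reuse:
            raise ValueError(f"Variable {name} already exists, disallowed. "
                             "Did you mean to set reuse=True?")
        return _store[name]
    _store[name] = object()  # stand-in for a freshly created variable
    return _store[name]

v1 = get_variable("InceptionResnetV2/Conv2d_1a_3x3/weights")
v2 = get_variable("InceptionResnetV2/Conv2d_1a_3x3/weights", reuse=True)
assert v1 is v2  # the second call returns the same variable
```

In TF 1.x the usual fixes are calling scope.reuse_variables() before the second construction, opening the scope with tf.variable_scope(..., reuse=True), or, often simplest, building the graph once and running the session repeatedly, once per image.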

tensorflow - tensor forest estimator ValueError when fitting the training part

Code:

from sklearn import cross_validation as cv
import numpy as np
from tensorflow.contrib.learn.python.learn.estimators import estimator
from tensorflow.contrib.tensor_forest.python import tensor_forest

X = np.array([[  74., 166., 331., 161., 159., 181., 180.],
              [ 437., 427., 371., 361., 393., 465., 464.],
              [ 546., 564., 588., 595., 536., 537., 520.],
              [  89.,  89.,  87.,  87., 108., 113., 111.],
              [  75.,  90.,  74.,  89., 130., 140., 135.]])
Y = np.array([[ 51., 43., 29., 43., 43., 41., 42.]...Read more
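Shape-related ValueErrors at fit time almost always mean the sample axes of the features and labels disagree, so a quick sanity check before calling fit can localize the problem. A NumPy sketch of that check (the shapes below are illustrative, not taken from the question's truncated arrays):

```python
import numpy as np

def check_xy(X, Y):
    """Raise early if the number of samples in X and Y disagree."""
    X, Y = np.asarray(X), np.asarray(Y)
    if X.shape[0] != Y.shape[0]:
        raise ValueError(f"X has {X.shape[0]} samples but Y has {Y.shape[0]}")
    return X.astype(np.float32), Y.astype(np.float32)

X = np.ones((5, 7))
Y = np.ones((5, 1))
check_xy(X, Y)                      # fine: 5 samples on both sides
# check_xy(X, np.ones((1, 7)))      # would raise: 5 samples vs 1
```

If the labels turn out to be laid out one-per-column rather than one-per-row, a transpose (Y.T) or reshape before fitting is usually the fix.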

TensorFlow error: logits and labels must be same size

I've been trying to learn TensorFlow by implementing ApproximatelyAlexNet based on various examples on the internet. Basically, I'm extending the AlexNet example here to take 224x224 RGB images (rather than 28x28 grayscale images), adding a couple more layers, and changing kernel sizes, strides, etc., per other AlexNet implementations I've found online. I've worked through a number of mismatched-shape errors, but this one has me stumped:

tensorflow.python.framework.errors.InvalidArgumentError: logits and labels must be same size: logits_size=dim {...Read more
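This error usually means the tensor fed into the loss no longer has one logits row per label row, often because a reshape between the conv stack and the first fully connected layer silently changed the effective batch dimension. A NumPy sketch of the invariant (shapes are illustrative, not from the question):

```python
import numpy as np

batch, num_classes = 8, 10
logits = np.zeros((batch, num_classes))
labels = np.zeros((batch, num_classes))  # one-hot labels

# Softmax cross-entropy style losses require identical shapes.
assert logits.shape == labels.shape

# A common cause of the mismatch: flattening the conv output with a
# hard-coded size that doesn't match the real spatial dimensions, which
# folds extra elements into the batch axis. Using -1 for the flattened
# axis keeps the batch dimension intact:
conv_out = np.zeros((batch, 7, 7, 64))
flat = conv_out.reshape(batch, -1)       # shape (8, 3136), batch preserved
assert flat.shape[0] == labels.shape[0]
```

When debugging, printing the static shape of the tensor right before the loss op (logits.get_shape() in TF 1.x) at each layer boundary quickly reveals where the batch axis changes.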

How to use the MNIST dataset on Linux using tensorflow

I'm new to machine learning and I am following tensorflow's tutorial to create some simple neural networks which learn the MNIST data. I want to run code that recognizes handwritten digits using the MNIST data, but I don't know how to run it. Should I download the data to my machine, extract it, put it in a folder, and then set the path in the code, or does tensorflow contain the data? When I do import input_data I get No module named 'input_data'; also when I do from tensorflow.examples.tutorials.mnist import input_data ==> No m...Read more

tensorflow - Keras ImageDataGenerator Preprocessing

As an example, consider fine-tuning a Resnet50 model in Keras. For example here:

from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np

model = ResNet50(weights='imagenet')
train_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(
    "./data/train",
    target_size=(299, 299),
    batch_size=50,
    class_mode='binary')
model.fit_generator(t...Read more
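One thing the snippet omits is the model's own preprocessing: in Keras this is typically wired in via ImageDataGenerator(preprocessing_function=preprocess_input) so every generated batch is transformed the same way the pretrained weights expect. For ResNet50 that preprocessing is an RGB-to-BGR flip plus per-channel ImageNet mean subtraction; a NumPy sketch approximating it (mean values as published for ImageNet):

```python
import numpy as np

IMAGENET_MEANS_BGR = np.array([103.939, 116.779, 123.68])

def resnet50_style_preprocess(x):
    """Approximates keras.applications.resnet50.preprocess_input:
    flip RGB to BGR, then subtract the ImageNet channel means."""
    x = np.asarray(x, dtype=np.float64)
    x = x[..., ::-1]                 # RGB -> BGR
    return x - IMAGENET_MEANS_BGR    # zero-center per channel

img = np.full((224, 224, 3), 128.0)  # dummy uniform gray image
out = resnet50_style_preprocess(img)
print(out[0, 0])                     # per-channel values after centering
```

Also note the snippet resizes to (299, 299) while ResNet50's ImageNet weights were trained at (224, 224); matching the original input size is usually safer when fine-tuning.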

tensorflow - how to convert a string Tensor to a list of float Tensors?

So, my code is like:

parsed_line = tf.decode_csv(line, [[0], [0], [""]])
print(parsed_line[0])
del parsed_line[0]
del parsed_line[0]
features = parsed_line
print(parsed_line[0])

Then the result is

[<tf.Tensor 'DecodeCSV:0' shape=() dtype=int32>, <tf.Tensor 'DecodeCSV:1' shape=() dtype=int32>, <tf.Tensor 'DecodeCSV:2' shape=() dtype=string>]

and

[<Tensor("DecodeCSV:2", shape=(), dtype=string)>]

The CSV line I will give to this decode function is 1, 0, 0101010010101010101010, and I want this "01010100101010...Read more
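The target transformation itself (digit string to a list of floats) is simple; a plain-Python sketch of the desired behavior (inside a TF 1.x graph you would need string ops instead, e.g. splitting the string into characters and converting each with tf.string_to_number, the details of which depend on the TF version):

```python
def digits_to_floats(s):
    """Turn a digit string like '0101' into [0.0, 1.0, 0.0, 1.0]."""
    return [float(c) for c in s]

print(digits_to_floats("0101010010101010101010")[:4])  # [0.0, 1.0, 0.0, 1.0]
```

If the pipeline allows it, doing this conversion in Python before the data enters the graph (e.g. in a generator feeding the input pipeline) avoids the graph-mode string handling entirely.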

tensorflow - Transfer learning from the pre-trained NASNet network: how to know the number of layers to freeze?

In order to train a model for image classification (using Keras or Tensorflow), I want to retrain a certain number of layers of NASNetMobile using my own dataset of images. In this paper: (section A.7) we can read: "Additionally, all models use an auxiliary classifier located at 2/3 of the way up the network". Here are the layers of NASNetMobile that I want to do the transfer learning from: . Based on the previous, should I freeze the ...Read more
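Whatever cut point is chosen, the Keras mechanics are the same: set trainable = False on every layer below it and re-compile the model. A schematic of that loop using stand-in layer objects (not a real NASNet; the cut index is hypothetical):

```python
class Layer:
    """Stand-in for a Keras layer: just a name and a trainable flag."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

# Pretend model with 10 layers; freeze everything below the cut point.
layers = [Layer(f"layer_{i}") for i in range(10)]
cut = 7  # hypothetical index chosen for this sketch
for layer in layers[:cut]:
    layer.trainable = False

frozen = [l.name for l in layers if not l.trainable]
print(len(frozen))  # 7
```

With a real Keras model the loop is `for layer in model.layers[:cut]: layer.trainable = False`, followed by compiling the model again so the change takes effect. One common heuristic, suggested by the paper's note about the auxiliary classifier at 2/3 depth, is to start by freezing the early layers and unfreezing progressively from the top while monitoring validation accuracy.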

Update only part of the word embedding matrix in Tensorflow

Assuming that I want to update a pre-trained word-embedding matrix during training, is there a way to update only a subset of the word embedding matrix?

I have looked into the Tensorflow API page and found this:

# Create an optimizer.
opt = GradientDescentOptimizer(learning_rate=0.1)

# Compute the gradients for a list of variables.
grads_and_vars = opt.compute_gradients(loss, <list of variables>)

# grads_and_vars is a list of tuples (gradient, variable). Do whatever you
# need to the 'gradient' part, for example cap them, etc.
capped_grads_and_v...Read more
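Between compute_gradients and apply_gradients one can mask the embedding gradient so that only chosen rows ever change. A NumPy sketch of that masking step (the row indices and learning rate are illustrative):

```python
import numpy as np

vocab, dim = 5, 3
embeddings = np.zeros((vocab, dim))
grad = np.ones((vocab, dim))          # pretend dense gradient from the loss
trainable_rows = np.array([0, 2])     # only these rows may be updated

# Zero the gradient everywhere except the rows we want to keep training.
mask = np.zeros((vocab, 1))
mask[trainable_rows] = 1.0
masked_grad = grad * mask

lr = 0.1
embeddings -= lr * masked_grad        # the SGD step touches rows 0 and 2 only

print(embeddings[1])                  # untouched row stays [0. 0. 0.]
```

In TensorFlow the equivalent multiplication would be inserted into grads_and_vars before calling opt.apply_gradients. One caveat: when the embedding is read through tf.nn.embedding_lookup, its gradient arrives as tf.IndexedSlices rather than a dense tensor, so the masking has to be applied per slice index instead of by elementwise multiplication.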