Top new questions this week:
|
I asked ChatGPT (3.5 and 4) about the current date and it answered correctly. In the subsequent conversation, it was not able to explain how it had this knowledge. I always thought that the model only …
|
I have no access to GPT-4, but I wonder whether it can do the following (where ChatGPT failed): perform a syntactic and morphological analysis of sentences in a language like Russian, marking cases, parts …
|
I do not know at all how AI works. After checking out the first open AI system available to the public, ChatGPT, I am curious whether systems like this could contribute to scientific theory in the …
|
ChatGPT can explain a given code snippet; we can also ask questions like “What does this variable do?” or “Why is this used?”. I gave it a C++ function snippet from a popular open-source …
|
I am struggling to understand what makes a scheme on-policy or off-policy. From what I have read, we can say that deep Q-learning is off-policy because we use a different policy like $\epsilon$-greedy …
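The distinction the question is circling can be made concrete in a few lines. This is a minimal tabular Q-learning sketch (toy state/action counts and hyperparameters chosen for illustration): the behaviour policy that picks actions is ε-greedy, but the update target uses the max over actions, i.e. the greedy target policy — which is exactly what makes Q-learning off-policy.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2

def eps_greedy(s):
    # Behaviour policy: explore with probability eps, otherwise act greedily.
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def q_update(s, a, r, s_next):
    # Off-policy target: max over next actions (the greedy *target* policy),
    # regardless of which action the eps-greedy behaviour policy actually
    # takes in s_next. Replacing np.max with Q[s_next, eps_greedy(s_next)]
    # would turn this into on-policy SARSA.
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
```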
|
I am attempting to program a Denoising Diffusion Model based on the one introduced in the article by Ho et al. (2020). However, I have run into issues while testing the reverse diffusion process. …
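For reference when debugging a reverse process like the one described, here is a sketch of a single DDPM reverse-diffusion step following Algorithm 2 of Ho et al. (2020). The function names, the linear beta schedule in the comments, and the NumPy setting are illustrative assumptions, not the asker's code; `eps_pred` stands in for whatever noise-prediction network the model uses.

```python
import numpy as np

def reverse_step(x_t, eps_pred, t, betas, rng):
    """One DDPM reverse-diffusion step (Ho et al., 2020, Algorithm 2).

    x_t      : current noisy sample at timestep t
    eps_pred : the model's noise prediction for (x_t, t)
    betas    : forward-process variance schedule beta_0 .. beta_{T-1}
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    a_t, ab_t = alphas[t], alpha_bar[t]
    # Posterior mean: (x_t - (1 - a_t) / sqrt(1 - ab_t) * eps) / sqrt(a_t)
    mean = (x_t - (1.0 - a_t) / np.sqrt(1.0 - ab_t) * eps_pred) / np.sqrt(a_t)
    if t == 0:
        return mean  # no noise is added at the final step
    z = rng.standard_normal(x_t.shape)
    return mean + np.sqrt(betas[t]) * z  # sigma_t^2 = beta_t variant
```

A common bug this makes visible: forgetting the `t == 0` special case, or indexing `alpha_bar` with an off-by-one, both of which blow up samples near the end of the chain.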
|
To me it looks like it is based on GPT-3. On the other hand, there were rumors that the training of GPT-3 was done with errors, but retraining was impossible due to the cost.
|
Greatest hits from previous weeks:
|
In hill-climbing methods, at each step, the current solution is replaced with the best neighbour (that is, the neighbour with the highest/lowest value). In simulated annealing, “downhill” moves are …
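The contrast can be sketched in a few lines. This is a minimal simulated-annealing loop (the function names, geometric cooling schedule, and hyperparameters are illustrative choices, not part of the question): unlike hill climbing, a worse neighbour is still accepted with probability exp(-Δ/T), and that probability shrinks as the temperature cools, so the search gradually turns greedy.

```python
import math
import random

def anneal(neighbours, energy, x0, T0=1.0, cooling=0.95, steps=200, seed=0):
    """Minimise `energy` by simulated annealing.

    Hill climbing would accept a candidate only when delta <= 0; here a
    worse candidate is also accepted with probability exp(-delta / T).
    """
    rng = random.Random(seed)
    x, T = x0, T0
    for _ in range(steps):
        cand = rng.choice(neighbours(x))
        delta = energy(cand) - energy(x)
        if delta <= 0 or rng.random() < math.exp(-delta / T):
            x = cand
        T *= cooling  # geometric cooling: late iterations are near-greedy
    return x
```

For example, minimising `(x - 3) ** 2` over the integers with `neighbours = lambda x: [x - 1, x + 1]` wanders at high temperature and then settles at the minimum as T drops.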
|
I am not looking for an efficient way to find primes (which of course is a solved problem). This is more of a “what if” question. So, in theory, could you train a neural network to predict …
|
As human beings, we can think about infinity. In principle, if we have enough resources (time, etc.), we can count infinitely many things (including abstract ones, like numbers, or real ones). For example, at least, …
|
The Wikipedia article for the universal approximation theorem cites a version of the universal approximation theorem for Lebesgue-measurable functions from this conference paper. However, the paper …
|
As far as I can tell, neural networks have a fixed number of neurons in the input layer. If neural networks are used in a context like NLP, sentences or blocks of text of varying sizes are fed to a …
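One standard answer to how fixed-size input layers cope with variable-length text is padding plus a mask. A minimal sketch (the function name and pad value are illustrative assumptions): sequences in a batch are padded to the batch's longest length, and a boolean mask tells downstream layers which positions are real tokens.

```python
import numpy as np

def pad_batch(sequences, pad_value=0):
    """Pad variable-length token-id sequences to the batch max length.

    Returns the padded int array and a boolean mask marking real tokens,
    so padded positions can be ignored (e.g. in attention or pooling).
    """
    max_len = max(len(s) for s in sequences)
    batch = np.full((len(sequences), max_len), pad_value, dtype=np.int64)
    mask = np.zeros((len(sequences), max_len), dtype=bool)
    for i, s in enumerate(sequences):
        batch[i, : len(s)] = s
        mask[i, : len(s)] = True
    return batch, mask
```

Recurrent and attention-based models consume such batches directly; truncation or bucketing by length are common companions to keep padding overhead small.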
|
Batch size is a term used in machine learning that refers to the number of training examples utilised in one iteration. The batch size can be one of three options: batch mode, where the batch size is …
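The three modes differ only in one parameter, which a small sketch makes plain (the generator name is an illustrative choice): with `batch_size = len(X)` you get batch mode, `batch_size = 1` gives stochastic (online) mode, and anything in between is mini-batch mode.

```python
import numpy as np

def minibatches(X, y, batch_size, seed=0):
    """Yield shuffled mini-batches of (X, y).

    batch_size == len(X) -> batch mode (one update per epoch)
    batch_size == 1      -> stochastic mode (one update per example)
    otherwise            -> mini-batch mode
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        take = idx[start : start + batch_size]
        yield X[take], y[take]
```

Note that the final batch may be smaller when the dataset size is not divisible by the batch size.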
|
It is said that activation functions in neural networks help introduce non-linearity. What does this mean? What does non-linearity mean in this context? How does the introduction of this non-…
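The standard answer can be demonstrated in a few lines with deliberately small, hand-picked matrices: two stacked purely linear layers always collapse into a single linear map, so without an activation function extra depth adds no expressive power; inserting a ReLU breaks that collapse.

```python
import numpy as np

# Two stacked *linear* layers are equivalent to one linear layer:
# W2 @ (W1 @ x) == (W2 @ W1) @ x for every x.
W1 = np.array([[1.0, 0.0], [0.0, -1.0]])
W2 = np.array([[1.0, 1.0]])
x = np.array([1.0, 1.0])
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

# A ReLU between the layers breaks the collapse, so the two-layer network
# can represent functions that no single linear layer can:
relu = lambda v: np.maximum(v, 0.0)
deep = W2 @ relu(W1 @ x)   # relu([1, -1]) = [1, 0], so this gives [1.0]
collapsed = (W2 @ W1) @ x  # the purely linear composition gives [0.0]
```

Here the non-linearity is precisely what lets the network distinguish the two cases; any non-linear activation (sigmoid, tanh, GELU, …) plays the same role.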
|
Can you answer these questions?
|
In the realm of game-playing, such as in the cases of AlphaGo and Deep Blue, can humans ultimately surpass AI in skill? Despite the current dominance of machine learning, what factors may contribute …
|
I’m hoping some of you can shed light on a few hypothetical questions about AGI. My understanding is that AGI refers to an agent that can reason, strategize, solve problems, make judgments under …
|
I would like to combine GANs and NLP to create a system that can take an input and generate an appropriate output. For example, …
|