
Alternating between two languages, e.g. English/French, as in

One, deux, three, quatre, five, six, seven, huit, nine, dix.

Try it. Why is it far more difficult than counting backward while skipping all even numbers? I tried and found this exercise very difficult, even though I'm a native speaker of both French and English. For me, it's easier to enumerate square numbers backward, or even prime numbers.

Does it involve two very different, unconnected parts of the brain, say A and B, with constant swapping between A and B? It feels like it does. Or is it because of a broken path in my brain? Does everyone have the same problem?

Now an interesting question: how do you detect algorithms that are supposed to run fast on a computer but are significantly slowed down by constant swapping between two different parts of RAM (random access memory)? How do you detect and fix this? What is the potential gain, in terms of computer time usage?
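
One rough way to probe this, assuming Python and NumPy (the array size and access patterns below are just placeholders for illustration): time the same reduction over a large array once in sequential order and once in a shuffled order that forces constant jumps through RAM. The gap between the two timings hints at what poor memory locality costs.

```python
import time
import numpy as np

n = 10_000_000
data = np.random.rand(n)

sequential_idx = np.arange(n)            # contiguous, cache-friendly order
shuffled_idx = np.random.permutation(n)  # random order, constant jumps through RAM

def timed_sum(indices):
    start = time.perf_counter()
    total = data[indices].sum()          # gather in the given order, then reduce
    return total, time.perf_counter() - start

_, t_seq = timed_sum(sequential_idx)
_, t_rand = timed_sum(shuffled_idx)
print(f"sequential: {t_seq:.3f}s, shuffled: {t_rand:.3f}s")
```

On most machines the shuffled pass is noticeably slower even though both passes touch exactly the same data in RAM, which gives a ballpark for the potential gain from fixing the access pattern.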


Replies to This Discussion

You are constantly switching back and forth between the text processor (A) and the numeric processor (B), and 95% of the "brain" time is spent disconnecting from A, connecting to B, disconnecting from B, and connecting to A. Just a wild guess. But the same could occur in some computer algorithms.

On a different note, how many algorithms require such constant, artificial swapping? It occurs very rarely in my opinion, which is why the brain is not well suited to this type of counting. Likewise, there's no doubt the same occurs with computers, but again, you should not blame the computer architecture; rather, such instances of swapping are very rare and should be handled by redesigning the algorithm. Now, if there is a special class of processes requiring this type of swapping all the time, it might be worth designing a specific processor or type of memory, just as any computer has specialized graphics processors (and your brain does too).

Truly bilingual people can usually swap between (say) French and English within the same sentence very easily, and it increases communication speed without decreasing quality (if the listener is also fully bilingual). It indeed happens naturally all the time.

Yet, this enumerating exercise is very hard to perform. Not sure why.

A fascinating comparison. Living in San Diego, I see swapping all the time.

My guess is that counting sequences are learned "by rote," to the point that they are almost hard-wired, and are essentially handled by a different part of the brain than the one that generates meaningful sentences. Here's a quick but incomplete test of this theory: try using numbers in a conversation that is not based on their quantity or their sequence (for example, "there were 12 people in my class, but 400 in the first company I worked for, and ...."). You can even write down the Arabic numerals first. Now try to discuss this bilingually.

I did a bit of practice. The first 15 numbers are really hard. Then it seems to get easier and easier, probably because a pattern has become established by the time you reach 30. Then it becomes routine.

Also, as the numbers grow bigger in the sequence, it takes so long to say them (0.3 second rather than 0.1 second for the first 10 numbers) that you can switch back and forth easily. The language swap takes about 0.3 second (0.6 second at the beginning, 0.2 second after a bit of practice), so if it takes 0.3 second or more just to say the word, there is no slowdown, since both operations (saying the word and swapping languages) can be done simultaneously.

This problem is really one of competing signals. Either English or French is your "native" language, and you likely have a large number of unidirectional rote connections for previously established patterns in that language's subspace. Your second-language signals are similarly unidirectionally connected, so you can recall a simple sequence or two. The problem is that these memories are selected for utterance by the amplitude of their "select me" signals, in winner-take-all fashion. Switching contexts (in this case, languages) is an imperfect process, so the rote unidirectional sequence-recall signals create spurious, interfering competition for the composite selection task described.

I've done some reading on neuroplasticity over the last few years and have been dabbling in cognitive testing for assessment and selection. What appears to be happening is that the neural pathways responsible for language are separate enough that completely different areas of the brain must be engaged to do both things, in this case numerical sequencing and verbal language ability. Most people are typically better at one or the other, but not both. Because the areas of the brain that handle math/sequencing and verbal expression are different, there is lag time between the cognitive processes, as one process must complete before the other can be used.

Also, to add support to Amy's comment, I had a German teacher in high school who claimed to have difficulty sticking to one language in conversation with locals when she would visit home (Germany). People who speak different languages live in much closer proximity to one another there, so there is a much higher occurrence of polyglots in that region. I also suspect their culture is more supportive of multi-language education.

The really neat thing is that, according to the newer theory of neuroplasticity, the neurons/networks can be optimized or strengthened to decrease the lag between the two processes, but they will most likely remain distinctly separate cognitive processes for those of us above a certain age. That is, unless there is some form of damage to the brain that impacts one of the cognitive processes, at which point the impacted process has to be re-learned and may enlist a certain number of neurons and connections from the region of the brain responsible for the other process. In that case, one may end up as somewhat of a synesthete.

With respect to the problem of fixing a computer: the problem is that most computers are serial processors. They're just really fast at processing one thing at a time. Because of the way the brain is constructed, it can be said to have many dedicated processors for a whole variety of inputs; the human brain is really good at processing millions of inputs at one time, it's just really slow. What you would need, it seems to me, is a scalable machine capable of parallel processing and computational software designed to take advantage of that machine's parallel capability. In other words, you need some really smart people and some funding, or a lot of time to figure out how to do it yourself. If you were successful, you could run computationally intense algorithms (discrete-entity simulations, natural-language text miners, wider data sets, etc.) without choking your equipment. Depending on how many processors you can use, and how much RAM you can cram in and still address, you could save a lot of time. How much? You'd have to calculate that with respect to the resources you can scratch together. That will tell you whether it's worth your time and money.
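
A minimal sketch of that serial-versus-parallel point, assuming Python's standard multiprocessing module (the task, chunk count, and worker count below are placeholders): the same CPU-bound work is run one chunk at a time, then spread across several worker processes.

```python
import time
from multiprocessing import Pool

def heavy_task(seed):
    # Stand-in for one chunk of a computationally intense algorithm.
    total = 0
    for i in range(2_000_000):
        total += (i * seed) % 7
    return total

if __name__ == "__main__":
    chunks = list(range(8))

    start = time.perf_counter()
    serial = [heavy_task(c) for c in chunks]      # one thing at a time
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(processes=4) as pool:               # several chunks at once
        parallel = pool.map(heavy_task, chunks)
    t_parallel = time.perf_counter() - start

    assert serial == parallel
    print(f"serial: {t_serial:.2f}s, parallel: {t_parallel:.2f}s")
```

The speedup is bounded by the number of cores and by how independent the chunks really are, which is exactly the "how much is it worth" calculation mentioned above.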

How we deal with this to get speed is through parallelism. An algorithm that takes a long time can be broken into smaller pieces, and those smaller pieces can all be run in parallel at the same time. Also, for these types of things, more RAM is important. The best approach is to sort the values in both datasets and then merge them. At least, that is how we deal with them.
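
A small sketch of the sort-then-merge idea, assuming Python (the two datasets and the join key below are made up for illustration): sort both datasets on the key once, then walk through them in a single linear pass instead of repeatedly probing one against the other.

```python
def sort_merge_join(left, right, key=lambda r: r[0]):
    # Sort both datasets on the join key, then merge in one linear pass.
    left = sorted(left, key=key)
    right = sorted(right, key=key)
    i = j = 0
    matches = []
    while i < len(left) and j < len(right):
        kl, kr = key(left[i]), key(right[j])
        if kl < kr:
            i += 1
        elif kl > kr:
            j += 1
        else:              # keys match (assumes at most one match per key)
            matches.append((left[i], right[j]))
            i += 1
            j += 1
    return matches

a = [(3, "x"), (1, "y"), (2, "z")]
b = [(2, "foo"), (3, "bar"), (4, "baz")]
print(sort_merge_join(a, b))   # keys 2 and 3 line up
```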

That's sort of what I was referring to. I'm looking a little further out than running two algorithms at the same time in the same environment. I've been looking into PVM (Parallel Virtual Machine). I have a bunch of computers lying fallow because they individually lack the power to do what I need, but if I can get them to process small chunks, much as the SETI project did with its screensaver, that would be a boon to my efforts. The only problem is that getting something like that up and running is a fairly serious undertaking by itself, and even if it succeeds, it doesn't solve the problem that the software running the algorithms may not work very well in an environment that wants to break its threads into smaller and smaller pieces to be computed elsewhere and sent back. The problem with running the same software in a "parallel" runtime environment is that it still uses the same resources and CPU time, even with hyper-threading. With PVM, or something similar, one can theoretically commandeer physical resources, process time, priority, and so on from an array of computers on a network. The biggest detractor is the cost of setup in terms of time, expertise, and probably money (to pay for the expertise and for downtime or development time).
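
As a purely local stand-in for that chunk-out-and-collect pattern, assuming Python's multiprocessing module (in a real PVM-style setup the work units would be shipped to other machines on the network rather than to local processes): a queue of chunks is drained by several workers, and the partial results are gathered back.

```python
from multiprocessing import Process, Queue

def worker(tasks, results):
    while True:
        chunk = tasks.get()
        if chunk is None:                        # sentinel: no more work
            break
        results.put(sum(x * x for x in chunk))   # stand-in computation on one chunk

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    data = list(range(1_000_000))
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]

    workers = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
    for w in workers:
        w.start()
    for chunk in chunks:
        tasks.put(chunk)
    for _ in workers:
        tasks.put(None)                          # one sentinel per worker

    total = sum(results.get() for _ in chunks)   # collect the partial results
    for w in workers:
        w.join()
    print(total)
```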

If the problem is something short-term or non-essential to the business, I definitely agree that pre-computing and merging is probably the best way to go in the short run, in lieu of creating a virtual desktop "supercomputer".

An example, in the context of computers, would be a hash table with values or keys that are short 95% of the time, the remaining 5% being extremely large entries. Computers will process this type of structure very inefficiently, even though it is entirely stored in RAM. The fix is to have one hash table for the 95% of normal entries and another data structure for the remaining 5%, with two processes running in parallel: one for the 95%, one for the 5%. Sometimes such badly designed hash tables are the result of poor programming / data processing skills on the part of the person who created the data structure / key in question, or the result of a shift in the type of streaming data processed by the algorithm.
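
A toy sketch of that fix, assuming Python (the 1 KB cutoff, the data, and the processing step are placeholders, not part of the original example): route the rare oversized values into their own structure, then let two parallel processes each handle one part.

```python
from multiprocessing import Pool

LARGE = 1024   # assumed cutoff, in characters, for an "extremely large" value

def split_table(table):
    # Route the rare oversized values into a separate structure.
    small, large = {}, {}
    for k, v in table.items():
        (large if len(v) > LARGE else small)[k] = v
    return small, large

def process_part(part):
    # Stand-in for whatever work is actually done on the entries.
    return sum(len(v) for v in part.values())

if __name__ == "__main__":
    # 95% short values, 5% very large ones.
    table = {i: "x" * (50_000 if i % 20 == 0 else 10) for i in range(10_000)}
    small, large = split_table(table)
    with Pool(processes=2) as pool:              # one process per part
        totals = pool.map(process_part, [small, large])
    print(totals)
```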
