Wearable digital technologies such as Augmented Reality glasses offer a unique platform not only for monitoring proxies of individual human behaviour (e.g. eye and limb movements, posture, location, skin conductivity) but also for affecting behaviour, as instances of persuasive technologies often used to achieve personal goals, e.g. integrating physical exercise into everyday life. However, gracefully affecting in-situ human behaviour with artificial computational systems poses several challenges. It requires carefully interfacing digital processes running on the wearable device(s) with biological processes taking place inside the human body (e.g. cognitive, perceptual, motoric), and it must work within the constraints of engineering viability, user experience, and ethics. We present our initial attempts to synchronize human biological visual attention processes with eye tracking-based visual stimuli generation in two application domains: counteracting racial discrimination in the assessment of job applications, and facilitating assembly tasks. Characteristic of both approaches is that the persuasion aims to be completely unnoticeable, at least in the long term. We then move on to initial ideas for a more general model that integrates perceptual and cognitive functions across the biological-digital border to optimize the system as a whole. More specifically, today’s AI architectures struggle to achieve human-like high-level cognition and perception, which, we argue, could potentially be addressed by a carefully designed symbiotic information exchange between the symbolic processes running inside the human brain and digital Machine Learning components tasked with the simpler sub-symbolic processing. Ethical concerns are also discussed, including the potential reduction of “free will” and the consequences of system failure.