DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites ‘first image data’ after reciting ‘image data’, making it unclear whether the two refer to the same element. For examination purposes, ‘first image data’ will be treated as a subset of ‘image data’.
Claim 1 recites ‘nerve impulses’ multiple times, making it unclear whether each recitation refers to the same element. For examination purposes, the recitations will be treated as the same element.
The term “some” in claim 1 is a relative term which renders the claim indefinite. The term “some” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim 1 recites ‘second image data’ after reciting ‘image data’, making it unclear whether the two refer to the same element. For examination purposes, ‘second image data’ will be treated as a subset of ‘image data’.
Claim 1 recites the limitation "the signals" in Line 19. There is insufficient antecedent basis for this limitation in the claim.
Claim 2 recites the limitation ‘a set of image data’ and depends from claim 1, which recites ‘image data’, making it unclear whether the recitation in claim 2 refers to the recitation in claim 1. For examination purposes, they will be treated as the same element.
Claim 2 recites the limitation ‘a corresponding nerve impulse’ and depends from claim 1, which recites ‘nerve impulses’, making it unclear whether the recitation in claim 2 refers to the recitation in claim 1. For examination purposes, they will be treated as the same element.
The term “larger” in claim 3 is a relative term which renders the claim indefinite. The term “larger” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim 4 recites the limitation ‘image data’ and depends from claim 1, which recites ‘image data’, making it unclear whether the recitation in claim 4 refers to the recitation in claim 1. For examination purposes, they will be treated as the same element.
Claim 4 recites the limitation ‘nerve impulses’ and depends from claim 1, which recites ‘nerve impulses’, making it unclear whether the recitation in claim 4 refers to the recitation in claim 1. For examination purposes, they will be treated as the same element.
Claim 6 recites the limitation ‘image data’ and depends from claim 1, which recites ‘image data’, making it unclear whether the recitation in claim 6 refers to the recitation in claim 1. For examination purposes, they will be treated as the same element.
Claim 6 recites the limitation ‘nerve impulses’ and depends from claim 1, which recites ‘nerve impulses’, making it unclear whether the recitation in claim 6 refers to the recitation in claim 1. For examination purposes, they will be treated as the same element.
Claim 6 recites the limitation "the form" in Line 2. There is insufficient antecedent basis for this limitation in the claim.
The term “suitably” in claim 9 is a relative term which renders the claim indefinite. The term “suitably” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim 9 recites ‘first image data’ after reciting ‘image data’, making it unclear whether the two refer to the same element. For examination purposes, ‘first image data’ will be treated as a subset of ‘image data’.
Claim 9 recites ‘nerve impulses’ multiple times, making it unclear whether each recitation refers to the same element. For examination purposes, the recitations will be treated as the same element.
The term “some” in claim 9 is a relative term which renders the claim indefinite. The term “some” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim 9 recites ‘second image data’ after reciting ‘image data’, making it unclear whether the two refer to the same element. For examination purposes, ‘second image data’ will be treated as a subset of ‘image data’.
Claim 9 recites the limitation "the signals" in Line 22. There is insufficient antecedent basis for this limitation in the claim.
Claim 10 recites the limitation ‘a set of image data’ and depends from claim 9, which recites ‘image data’, making it unclear whether the recitation in claim 10 refers to the recitation in claim 9. For examination purposes, they will be treated as the same element.
Claim 10 recites the limitation ‘a corresponding nerve impulse’ and depends from claim 9, which recites ‘nerve impulses’, making it unclear whether the recitation in claim 10 refers to the recitation in claim 9. For examination purposes, they will be treated as the same element.
The term “larger” in claim 11 is a relative term which renders the claim indefinite. The term “larger” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim 12 recites the limitation ‘image data’ and depends from claim 9, which recites ‘image data’, making it unclear whether the recitation in claim 12 refers to the recitation in claim 9. For examination purposes, they will be treated as the same element.
Claim 12 recites the limitation ‘nerve impulses’ and depends from claim 9, which recites ‘nerve impulses’, making it unclear whether the recitation in claim 12 refers to the recitation in claim 9. For examination purposes, they will be treated as the same element.
Claim 14 recites the limitation ‘image data’ and depends from claim 9, which recites ‘image data’, making it unclear whether the recitation in claim 14 refers to the recitation in claim 9. For examination purposes, they will be treated as the same element.
Claim 14 recites the limitation ‘nerve impulses’ and depends from claim 9, which recites ‘nerve impulses’, making it unclear whether the recitation in claim 14 refers to the recitation in claim 9. For examination purposes, they will be treated as the same element.
Claim 14 recites the limitation "the form" in Line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 17 recites ‘image data’ multiple times, making it unclear whether each recitation refers to the same element. For examination purposes, the recitations will be treated as the same element.
Claim 17 recites ‘nerve impulses’ multiple times, making it unclear whether each recitation refers to the same element. For examination purposes, the recitations will be treated as the same element.
Claim 17 recites the limitation "the image" in Line 8 and ‘the images’ in Lines 23-24. There is insufficient antecedent basis for these limitations in the claim.
Claim 17 recites ‘a first image’ multiple times, making it unclear whether each recitation refers to the same element. For examination purposes, the recitations will be treated as the same element.
Claim 17 recites ‘a set of image data’ after reciting ‘image data’, making it unclear whether each recitation refers to the same element. For examination purposes, they will be treated as the same element.
Claim 17 recites ‘a corresponding nerve impulse’ after reciting ‘nerve impulses’, making it unclear whether each recitation refers to the same element. For examination purposes, they will be treated as the same element.
Claim 17 recites ‘at least one of the images in the series of images’, wherein the scope of the claim is unclear because ‘at least one’ could exceed the number of images in ‘the series of images’. For examination purposes, ‘at least one’ will be given an upper limit of whatever number of images the series comprises.
Claim 18 recites the limitation ‘image data’ and depends from claim 17, which recites ‘image data’, making it unclear whether the recitation in claim 18 refers to the recitation in claim 17. For examination purposes, they will be treated as the same element.
Claim 18 recites the limitation ‘nerve impulses’ and depends from claim 17, which recites ‘nerve impulses’, making it unclear whether the recitation in claim 18 refers to the recitation in claim 17. For examination purposes, they will be treated as the same element.
Claim 19 recites the limitation ‘image data’ and depends from claim 17, which recites ‘image data’, making it unclear whether the recitation in claim 19 refers to the recitation in claim 17. For examination purposes, they will be treated as the same element.
Claim 19 recites the limitation ‘nerve impulses’ and depends from claim 17, which recites ‘nerve impulses’, making it unclear whether the recitation in claim 19 refers to the recitation in claim 17. For examination purposes, they will be treated as the same element.
Claim 20 recites the limitation ‘the images’ in Line 1. There is insufficient antecedent basis for this limitation in the claim.
The term “larger” in claim 20 is a relative term which renders the claim indefinite. The term “larger” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1
The claimed invention in claims 1-20 is directed to statutory subject matter, as the claims recite a system (claims 9-16) and a method (claims 1-8 and 17-20).
Step 2A, Prong One
Regarding claims 1, 9, and 17, the recited steps are directed to a mental process, i.e., concepts performed in the human mind or by a human using pen and paper (see MPEP 2106.04(a)(2), subsection (III)).
Specifically from claims 1 and 9:
a) displaying a first image to a subject having at least one eye and a brain having a visual processing region, the first image having corresponding first image data;
b) receiving, by at least one processor in communication with the visual processing region, first signals associated with nerve impulses transmitted to the visual processing region in response to the subject viewing the first image with the at least one eye;
c) modifying the first image to produce a second image, the second image having corresponding second image data;
d) displaying the second image to the subject;
e) receiving, by the at least one processor, second signals associated with nerve impulses transmitted to the visual processing region in response to the subject viewing the second image with the at least one eye; and
f) comparing a change between the first image data and the second image data with a change between the first signals and the second signals to identify matches between at least some of the signals and at least some of the image data.
and claim 17:
displaying a series of images, that includes a first image, to a subject having at least one eye and a brain having a visual processing region, each image in the series of images having corresponding image data,
wherein for any given image in the series of images after the first image, the given image is a modified version of the image preceding the given image in the series of images;
for each image in the series of images displayed to the subject, receiving, by at least one processor in communication with the visual processing region, signals associated with nerve impulses transmitted to the visual processing region in response to the subject viewing the image with the at least one eye; and
for each pair of consecutive images in the series of images, comparing a change between the image data of a first image of the pair of consecutive images and the image data of a second image of the pair of consecutive images with a change between the signals received in response to the subject viewing the first image and the signals received in response to the subject viewing the second image to generate a one-to-one matching between each element of a set of image data and a corresponding nerve impulse, the set of image data including the image data of at least one of the images in the series of images.
These underlined limitations describe a mental process (including an observation, evaluation, judgment, or opinion) under the broadest reasonable interpretation, as a skilled practitioner is capable of performing the recited limitations and making a mental assessment thereafter. Examiner notes that nothing in the claims suggests that the limitations cannot be practically performed by a medical, biomedical, or engineering professional with the aid of pen and paper; with knowledge gained from education, background, or experience; or by using a generic computer as a tool to perform the mental process steps in real time. Nothing in the claims suggests an undue level of complexity that would prevent the mental process steps from being practically performed by a human with the aid of pen and paper, or by using a generic computer as a tool.
Examples of ineligible claims that recite mental processes include:
• a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group, LLC v. Alstom, S.A.;
• claims to “comparing BRCA sequences and determining the existence of alterations,” where the claims cover any way of comparing BRCA sequences such that the comparison steps can practically be performed in the human mind, University of Utah Research Foundation v. Ambry Genetics Corp.; and
• a claim to collecting and comparing known information (claim 1), which are steps that can be practically performed in the human mind, Classen Immunotherapies, Inc. v. Biogen IDEC.
See pp. 7-8 of the October 2019 Update: Subject Matter Eligibility.
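For illustration of the scope of the recited steps, the comparison in steps a) through f) of claims 1 and 9 can be sketched as follows. This is a minimal sketch using hypothetical data structures and names; it forms no part of the claims, the specification, or the cited prior art:

```python
# Hypothetical sketch of the claimed comparison step f).
# Image data are flat lists of pixel values; signals are the recorded
# responses, one value per location. All names are illustrative only.

def identify_matches(first_image, second_image, first_signals, second_signals):
    """Compare the change between the first and second image data with the
    change between the first and second signals, and return the indices at
    which a change in the image coincides with a change in the signal."""
    matches = []
    for i, (p1, p2, s1, s2) in enumerate(
            zip(first_image, second_image, first_signals, second_signals)):
        image_changed = (p2 - p1) != 0
        signal_changed = (s2 - s1) != 0
        if image_changed and signal_changed:
            matches.append(i)  # this image datum maps to this signal
    return matches

# Example: modifying only pixel 1 (step c) produces a changed signal
# only at index 1 (step e), so the match is identified at index 1.
first_image = [10, 10, 10]
second_image = [10, 99, 10]
first_signals = [5, 5, 5]
second_signals = [5, 8, 5]
print(identify_matches(first_image, second_image,
                       first_signals, second_signals))  # [1]
```

The sketch illustrates why the comparison, recited at this level of generality, could practically be performed by observation and evaluation alone.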
Step 2A, Prong Two
These judicial exceptions (abstract ideas) in claims 1-20 are not integrated into a practical application because:
• The abstract idea amounts to simply implementing the abstract idea on a computer. For example, the recitations regarding the generic computing components merely invoke a computer as a tool.
• The data-gathering steps do not add a meaningful limitation to the method, as they are insignificant extra-solution activity.
• There is no improvement to a computer or other technology. “The McRO court indicated that it was the incorporation of the particular claimed rules in computer animation that "improved [the] existing technological process", unlike cases such as Alice where a computer was merely used as a tool to perform an existing process.” MPEP 2106.05(a) II. The claims recite a computer that is used as a tool to perform the abstract ideas.
• The claims do not apply the abstract idea to effect a particular treatment or prophylaxis for a disease or medical condition. Rather, the abstract idea is utilized to determine a relationship among data to provide a medical measurement.
• The claims do not apply the abstract idea to a particular machine. “Integral use of a machine to achieve performance of a method may provide significantly more, in contrast to where the machine is merely an object on which the method operates, which does not provide significantly more.” MPEP 2106.05(b) II. “Use of a machine that contributes only nominally or insignificantly to the execution of the claimed method (e.g., in a data gathering step or in a field-of-use limitation) would not provide significantly more.” MPEP 2106.05(b) III. The pending claims utilize a computer to perform abstract ideas. The claims do not apply the obtained response measurement to a particular machine.
When considered in combination, the additional elements (i.e., the generic computer functions and conventional equipment/steps) do not amount to significantly more than the abstract idea. Looking at the claim limitations as a whole adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or any other technology. Their collective functions merely provide conventional computer implementation.
Step 2B
The additional elements are identified as follows: ‘displaying…image’ and ‘at least one processor’ in claims 1 and 17, ‘a computer usable non-transitory storage medium’, ‘suitably programmed system’, ‘displaying…image’, and ‘at least one processor’ in claim 9.
Those in the relevant field of art would recognize the above-identified additional elements as being well-understood, routine, and conventional means for data-gathering and computing, as demonstrated by
• Applicant's specification (Page 32) which discloses that the processor and memory comprise generic computer components that are configured to perform the generic computer functions that are well-understood, routine, and conventional activities previously known to the pertinent industry; and
• The prior art provided by the Applicant in the IDS and by the Examiner in the PTO-892, which discloses each of the elements as being known and conventional in the art.
Thus, the claimed additional elements “are so well-known that they do not need to be described in detail in a patent application to satisfy 35 U.S.C. § 112(a).” Berkheimer Memorandum, III.A.3. Furthermore, the court decisions discussed in MPEP § 2106.05(d)(II) note the well-understood, routine, and conventional nature of such additional elements as those claimed. See section III.A.2. of the Berkheimer memorandum.
Use of a machine that contributes only nominally or insignificantly to the execution of the claimed method (e.g., in a data gathering step or in a field-of-use limitation) would not integrate a judicial exception into a practical application or provide significantly more. See Bilski, 561 U.S. at 610, 95 USPQ2d at 1009 (citing Parker v. Flook, 437 U.S. 584, 590, 198 USPQ 193, 197 (1978)), and CyberSource v. Retail Decisions, 654 F.3d 1366, 1370, 99 USPQ2d 1690 (Fed. Cir. 2011). See MPEP 2106.05(b).
Displaying the images is merely extra-solution activity that must occur in order for the data gathering to occur.
Regarding the dependent claims, each is directed to 1) steps that are also abstract, 2) additional data output that is well-understood, routine, and previously known to the industry, or 3) additional elements recited at a high level of generality that are conventional in the art.
• Claims 2-8, 10-16, and 18-20 recite steps that are also abstract as a mental process through additional data gathering or analysis.
Although the dependent claims are further limiting, they do not recite significantly more than the abstract idea. A narrow abstract idea is still an abstract idea and an abstract idea with additional well-known equipment/functions is not significantly more than the abstract idea.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kim et al. (US 2016/0113545).
Regarding claims 1 and 9, Kim teaches a method for generating a mapping that maps between nerve impulses and image data (Abstract),
a computer usable non-transitory storage medium having a computer program embodied thereon for causing a suitably programmed system to generate a mapping that maps between nerve impulses and image data, by performing the following steps when such program is executed on the system (Abstract; Paragraph 0073; Figure 2),
the method comprising the following steps (claim 1), and the steps comprising (claim 9):
a) displaying a first image to a subject having at least one eye and a brain having a visual processing region, the first image having corresponding first image data (Paragraph 0085; “the subjects viewed an image stimulus of the media facade of the Galleria department store for 10 seconds”);
b) receiving, by at least one processor in communication with the visual processing region, first signals associated with nerve impulses transmitted to the visual processing region in response to the subject viewing the first image with the at least one eye (Paragraph 0085; “then their EEGs were measured”);
c) modifying the first image to produce a second image, the second image having corresponding second image data (Paragraph 0085; “After a break, the subjects viewed an image stimulus of the media facade of Seoul Square for 10 seconds”; change in image constitutes modifying);
d) displaying the second image to the subject (Paragraph 0085; “After a break, the subjects viewed an image stimulus of the media facade of Seoul Square for 10 seconds”);
e) receiving, by the at least one processor, second signals associated with nerve impulses transmitted to the visual processing region in response to the subject viewing the second image with the at least one eye (Paragraph 0085; “then their EEGs were measured.”); and
f) comparing a change between the first image data and the second image data with a change between the first signals and the second signals to identify matches between at least some of the signals and at least some of the image data (Paragraph 0087; “The EEG band activity analysis unit 134 compares EEG-band-specific activity ratios according to experimental image stimuli, and analyzes EEG-band-specific changes based on the average of band-to-band powers to examine the subjects' brain reactions to the experimental image stimuli of the media facades.”; Paragraph 0096; “In the analysis results of brain mapping images (FIG. 4), bands in which brain regions activated by the experimental image stimuli are similar to each other are distinguished from bands in which brain regions are significantly different from each other.”; Further Paragraph 0106).
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shen et al. (Reference U on PTO-892; 2019).
Regarding claims 1 and 9, Shen teaches a method for generating a mapping that maps between nerve impulses and image data (Abstract),
a computer usable non-transitory storage medium having a computer program embodied thereon for causing a suitably programmed system to generate a mapping that maps between nerve impulses and image data, by performing the following steps when such program is executed on the system (Abstract; analysis using machine learning would require a computer and non-transitory computer readable medium),
the method comprising the following steps (claim 1), and the steps comprising (claim 9):
a) displaying a first image to a subject having at least one eye and a brain having a visual processing region, the first image having corresponding first image data (stimulus image x; Training images; Figure 1; “The training images are presented to a human subject while brain activity is measured”; Page 4, Right Column; “The image presentation experiments comprised four distinct types of sessions that corresponded to the four categories of stimulus images described above. In one training-session set (natural images), 1,200 images were each presented once. This set of training session was repeated five times. In each test session (natural image, artificial shape, and alphabetical letters), 50, 40, and 10 images were presented 24, 20, and 12 times each, respectively.”);
b) receiving, by at least one processor in communication with the visual processing region, first signals associated with nerve impulses transmitted to the visual processing region in response to the subject viewing the first image with the at least one eye (Figure 1; “The training images are presented to a human subject while brain activity is measured”; Page 4, Right Column; “The fMRI data obtained during the image presentation experiment were preprocessed for motion correction followed by co-registration to the within-session high-resolution anatomical images of the same slices and subsequently to T1-weighted anatomical images. The coregistered data were then reinterpolated as 2 × 2 × 2 mm voxels.”);
c) modifying the first image to produce a second image, the second image having corresponding second image data (Page 4, Left Column; “Xi denote the i-th stimulus image in the dataset”; the second image in the dataset is considered modified in that it is a different image from the first; see Figure 1, where Training images comprises multiple images; Page 4, Right Column; “The image presentation experiments comprised four distinct types of sessions that corresponded to the four categories of stimulus images described above. In one training-session set (natural images), 1,200 images were each presented once. This set of training session was repeated five times. In each test session (natural image, artificial shape, and alphabetical letters), 50, 40, and 10 images were presented 24, 20, and 12 times each, respectively.”);
d) displaying the second image to the subject (Training images; Figure 1; “The training images are presented to a human subject while brain activity is measured”; Page 4, Right Column; “The image presentation experiments comprised four distinct types of sessions that corresponded to the four categories of stimulus images described above. In one training-session set (natural images), 1,200 images were each presented once. This set of training session was repeated five times. In each test session (natural image, artificial shape, and alphabetical letters), 50, 40, and 10 images were presented 24, 20, and 12 times each, respectively.”);
e) receiving, by the at least one processor, second signals associated with nerve impulses transmitted to the visual processing region in response to the subject viewing the second image with the at least one eye (Figure 1; “The training images are presented to a human subject while brain activity is measured”; Page 4, Right Column; “The fMRI data obtained during the image presentation experiment were preprocessed for motion correction followed by co-registration to the within-session high-resolution anatomical images of the same slices and subsequently to T1-weighted anatomical images. The coregistered data were then reinterpolated as 2 × 2 × 2 mm voxels.”); and
f) comparing a change between the first image data and the second image data with a change between the first signals and the second signals to identify matches between at least some of the signals and at least some of the image data (Image Reconstruction Model section; discloses how the data for the images are all used together in the neural network layers thus constituting a comparison).
Regarding claims 2 and 10, Shen teaches further comprising: g) repeating steps c) through f), wherein for each repetition of step c) the first image is modified differently than as was modified in the previous execution of step c), and wherein step c) through f) are repeated until there is a one-to-one matching between each element of a set of image data and a corresponding nerve impulse, the set of image data including the second image data (Page 4, Right Column; “The image presentation experiments comprised four distinct types of sessions that corresponded to the four categories of stimulus images described above. In one training-session set (natural images), 1,200 images were each presented once. This set of training session was repeated five times. In each test session (natural image, artificial shape, and alphabetical letters), 50, 40, and 10 images were presented 24, 20, and 12 times each, respectively. The presentation order of the images was randomized across runs. The fMRI data obtained during the image presentation experiment were preprocessed for motion correction followed by co-registration to the within-session high-resolution anatomical images of the same slices and subsequently to T1-weighted anatomical images. The coregistered data were then reinterpolated as 2 × 2 × 2 mm voxels.”).
Regarding claims 3 and 11, Shen teaches wherein for each execution of step c) the first image is modified with incrementally larger changes (Page 4, Right Column; “The image presentation experiments comprised four distinct types of sessions that corresponded to the four categories of stimulus images described above. In one training-session set (natural images), 1,200 images were each presented once. This set of training session was repeated five times. In each test session (natural image, artificial shape, and alphabetical letters), 50, 40, and 10 images were presented 24, 20, and 12 times each, respectively. The presentation order of the images was randomized across runs. The fMRI data obtained during the image presentation experiment were preprocessed for motion correction followed by co-registration to the within-session high-resolution anatomical images of the same slices and subsequently to T1-weighted anatomical images. The coregistered data were then reinterpolated as 2 × 2 × 2 mm voxels.”; the first two repetitions as taught by Shen meet this limitation as the claim only requires one change as it is based on each execution which can imply that there is only two executions of the step).
Regarding claims 4 and 12, Shen teaches further comprising: h) storing the one-to-one matching as data that is descriptive of the mapping between nerve impulses and image data (Discussion: “we have demonstrated that end-to-end training of a DNN model can directly map fMRI activity in the visual cortex to stimuli observed during perception, and thus reconstruct perceived images from fMRI data. The reconstructions of natural images were highly similar to the perceived stimuli in shape, and in some cases in color”).
Regarding claims 5 and 13, Shen teaches wherein the data that is descriptive of the mapping between nerve impulses and image data includes nerve impulse encoding values (Discussion: “we have demonstrated that end-to-end training of a DNN model can directly map fMRI activity in the visual cortex to stimuli observed during perception, and thus reconstruct perceived images from fMRI data. The reconstructions of natural images were highly similar to the perceived stimuli in shape, and in some cases in color”).
Regarding claims 6 and 14, Shen teaches wherein the data that is descriptive of the mapping between nerve impulses and image data is in the form of a configuration table that includes attributes of the image data in the set of image data (Figures 2 and 3; Image Reconstruction section).
Regarding claims 7 and 15, Shen teaches wherein the attributes include one or more of: color, intensity, position, or a nerve impulse encoding value (Figures 2 and 3; Image Reconstruction section).
Regarding claims 8 and 16, Shen teaches further comprising: g) performing steps a) through f) using a new first image that is different from the first image (Page 4, Right Column; “The image presentation experiments comprised four distinct types of sessions that corresponded to the four categories of stimulus images described above. In one training-session set (natural images), 1,200 images were each presented once. This set of training session was repeated five times. In each test session (natural image, artificial shape, and alphabetical letters), 50, 40, and 10 images were presented 24, 20, and 12 times each, respectively. The presentation order of the images was randomized across runs.”).
Regarding claim 17, Shen teaches a method for generating a mapping that maps between nerve impulses and image data (Abstract), the method comprising:
displaying a series of images, that includes a first image, to a subject having at least one eye and a brain having a visual processing region, each image in the series of images having corresponding image data (stimulus image x; Training images; Figure 1; “The training images are presented to a human subject while brain activity is measured”; Page 4, Right Column; “The image presentation experiments comprised four distinct types of sessions that corresponded to the four categories of stimulus images described above. In one training-session set (natural images), 1,200 images were each presented once. This set of training session was repeated five times. In each test session (natural image, artificial shape, and alphabetical letters), 50, 40, and 10 images were presented 24, 20, and 12 times each, respectively.”);
wherein for any given image in the series of images after the first image, the given image is a modified version of the image preceding the given image in the series of images (Page 4, Left Column; “Xi denote the i th stimulus image in the dataset, Vi”; the second image in the data set would be considered modified in that it is a different image from the first; see Figure 1, Training images comprises multiple images; Page 4, Right Column; “The image presentation experiments comprised four distinct types of sessions that corresponded to the four categories of stimulus images described above. In one training-session set (natural images), 1,200 images were each presented once. This set of training session was repeated five times. In each test session (natural image, artificial shape, and alphabetical letters), 50, 40, and 10 images were presented 24, 20, and 12 times each, respectively.”);
for each image in the series of images displayed to the subject, receiving, by at least one processor in communication with the visual processing region, signals associated with nerve impulses transmitted to the visual processing region in response to the subject viewing the image with the at least one eye (Figure 1; “The training images are presented to a human subject while brain activity is measured”; Page 4, Right Column; “The fMRI data obtained during the image presentation experiment were preprocessed for motion correction followed by co-registration to the within-session high-resolution anatomical images of the same slices and subsequently to T1-weighted anatomical images. The coregistered data were then reinterpolated as 2 × 2 × 2 mm voxels.”); and
for each pair of consecutive images in the series of images, comparing a change between the image data of a first image of the pair of consecutive images and the image data of a second image of the pair of consecutive images with a change between the signals received in response to the subject viewing the first image and the signals received in response to the subject viewing the second image to generate a one-to-one matching between each element of a set of image data and a corresponding nerve impulse, the set of image data including the image data of at least one of the images in the series of images (Image Reconstruction Model section; discloses how the data for the images are all used together in the neural network layers thus constituting a comparison).
Regarding claim 18, Shen teaches further comprising: storing the one-to-one matching as data that is descriptive of the mapping between nerve impulses and image data (Discussion: “we have demonstrated that end-to-end training of a DNN model can directly map fMRI activity in the visual cortex to stimuli observed during perception, and thus reconstruct perceived images from fMRI data. The reconstructions of natural images were highly similar to the perceived stimuli in shape, and in some cases in color”).
Regarding claim 19, Shen teaches wherein the data that is descriptive of the mapping between nerve impulses and image data includes attributes of the image data in the set of image data (Figures 2 and 3; Image Reconstruction section).
Regarding claim 20, Shen teaches wherein the images in the series of images are modified with incrementally larger changes when progressing from the first image in the series of images to a last image in the series of images (Page 4, Right Column; “The image presentation experiments comprised four distinct types of sessions that corresponded to the four categories of stimulus images described above. In one training-session set (natural images), 1,200 images were each presented once. This set of training session was repeated five times. In each test session (natural image, artificial shape, and alphabetical letters), 50, 40, and 10 images were presented 24, 20, and 12 times each, respectively. The presentation order of the images was randomized across runs. The fMRI data obtained during the image presentation experiment were preprocessed for motion correction followed by co-registration to the within-session high-resolution anatomical images of the same slices and subsequently to T1-weighted anatomical images. The coregistered data were then reinterpolated as 2 × 2 × 2 mm voxels.”; the first two repetitions as taught by Shen meet this limitation, as the claim only requires one change between consecutive images, which can be satisfied by as few as two images in the series).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Wang et al. (US Patent No. 11093033), Siwoff (US 2018/0192907), Ren (US 2008/0161915), Arndt (US 2009/0216091), and He et al. (Reference V on PTO-892).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK FERNANDES whose telephone number is (571)272-7706. The examiner can normally be reached Monday-Thursday 9AM-3PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JASON SIMS can be reached at (571)272-7540. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PATRICK FERNANDES/Primary Examiner, Art Unit 3791