DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election of species (c) for the first species election and species (g) for the second species election, corresponding to claims 1-6, 11, 14-16, and 19, in the reply filed on 11 July 2025 is acknowledged.
Because applicant did not distinctly and specifically point out the supposed errors in the restriction requirement, the election has been treated as an election without traverse (MPEP § 818.01(a)).
Claim Objections
Claims 2, 3, and 14 are objected to because of the following informalities:
Claim 2 recites “wherein the photoacoustic probes include….” However, claim 1 only recites “a photoacoustic probe” and does not recite an additional probe or probes. Examiner is interpreting this recitation as “wherein the photoacoustic probe includes…”
Claim 3 recites “a first linear encoder that generates first-direction linear motion” but the term “first-direction linear motion” was already recited in claim 1.
Claim 14 recites “which is a direction from the first end to the second end of the n-th scan line,….” However, there is no prior reference to “first end” or “second end” or “n-th scan line” or any “scan line” in claim 14. Examiner is interpreting this recitation as “which is a direction from a first end to a second end of a scan line...”
Appropriate correction is required.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are: “ultrasonic transceiving unit” recited in claim 1.
Because these claim limitation(s) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
With respect to “ultrasonic transceiving unit,” this element is interpreted under 35 U.S.C. 112(f) as having a pulser and a receiver disposed within an ultrasonic probe and also including, optionally, an amplifier (see [0049] and [0056] of Applicant’s disclosure). The term “unit” is separately defined at [0035] (e.g., “the term ‘unit’ used herein refers to software or a hardware element such as…”), but the description there does not appear to be applicable to the claimed function.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-6, 11, 14-16, and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “substantially” in claim 1 is a relative term which renders the claim indefinite. The term “substantially” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim 3 includes multiple references to “set interval” and “reference pulse signal” but relative to different elements. For example, claim 3 recites: a pulse signal generator that generates and outputs reference pulse signals at a set interval; a laser generator that outputs a laser pulse at a set interval to the to-be-examined object according to a reference pulse signal and the first-direction linear motion information; and a trigger controller that generates an output trigger signal at a set interval according to the reference pulse signal and the first-direction linear motion information.
It is not clear if the “set interval” at which the laser generator outputs is the same as the “set interval” that the pulse signal generator generates reference pulse signals and/or the same as the “set interval” at which the trigger controller outputs a signal.
It is not clear if the “reference pulse signal” that determines the output of pulses from the laser generator is the same as the “reference pulse signals” generated by the pulse signal generator. Likewise, it is not clear if the “reference pulse signal” that determines the output of pulses of the trigger signal from the trigger controller is the same as the “reference pulse signals” generated by the pulse signal generator.
Clarification is required. Claim 4 is rejected based upon its dependency from claim 3.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 5, 6, 11, 14, 15, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Appl. Publ. No. 2014/0024918 A1 to Kazuhiro Hirota (hereinafter “HIROTA”).
With respect to claim 1, HIROTA discloses an apparatus for inputting a combined photoacoustic/ultrasonic image (see, e.g., [0101] and ultrasound image reconstruction means 26, photoacoustic image reconstructing means 25, and image combining means 27 in Figure 1), which generates three-dimensional (3D) image information for a to-be-examined object by performing two-dimensional (2D) scanning on the to-be-examined object by a first-direction linear motion of a photoacoustic probe and a second-direction linear motion that is substantially perpendicular to the first-direction linear motion (see, e.g., Figures 4 and 5 and [0101] and [0092]: “Note that when photoacoustic images or ultrasonic images of subjects are obtained, the probe 11 is moved in a direction substantially perpendicular to the direction in which the ultrasonic transducers and the end portions of the light guiding means extend, to thereby two dimensionally scan the subjects with the laser beam and the ultrasonic waves.”), the apparatus comprising:
a photoacoustic probe that outputs a laser pulse output (see, e.g., laser unit 13) or an ultrasonic output (see, e.g., probe 11) to a to-be-examined object, receives a first ultrasonic input from the to-be-examined object by the laser pulse output (see, e.g., [0091]: “The probe 11 also detects ultrasonic waves (acoustic waves) which are generated by targets of measurement within subjects absorbing the laser beam output by the laser unit 13.” (emphasis added)), and receives a second ultrasonic input from the to-be-examined object by the ultrasonic output (see, e.g., [0091]: “The probe 11 further outputs (transmits) ultrasonic waves to subjects as a type of acoustic wave, and detects (receives) reflected ultrasonic waves reflected by the subjects.” (emphasis added)). NOTE: Although Figure 1 of HIROTA shows the probe 11 and the laser unit 13 as separate elements, HIROTA describes the two as being combined as parts of the same component. See above and also [0091], which describes features of each being arranged together: “The end portions of the light guiding means, that is, the leading end portions of a plurality of optical fibers or the like, are arranged along the arrangement direction of the plurality of ultrasonic transducers, and the laser beam is irradiated toward the subjects therefrom.” (emphasis added);
an ultrasonic transceiving unit (see, e.g., [0094] and ultrasonic wave unit 12) that generates an ultrasonic output signal for generating the ultrasonic output (see, e.g., “TRANSMITTED SIGNALS” extending from the transmission control circuit 30 of the ultrasonic wave unit 12 to the ultrasound probe 11 in Figure 1), receives the first ultrasonic input and the second ultrasonic input (see, e.g., [0094] and “RECEIVED SIGNALS” extending from the probe 11 toward the receiving circuit 21 of the ultrasonic wave unit 12 in Figure 1), and generates a photoacoustic image signal and an ultrasonic image signal, respectively (see, e.g., [0094] and the arrow representing data from the receiving circuit 21 to the A/D converting means 22 in Figure 1, which data ultimately reaches the ultrasound image reconstructing means 26 and the photoacoustic image reconstructing means 25 in Figure 1);
an analog/digital (A/D) converter that receives the photoacoustic image signal and the ultrasonic image signal to convert the same into digital image signals (see, e.g., A/D converting means 22 in Figure 1 and [0094]); and
a main controller that receives the digital image signals (see, e.g., tomographic image generating section 70 receiving data from the ultrasound image reconstructing means 26 and the photoacoustic image reconstructing means 25, and also projection image generating section 60 receiving data from the photoacoustic image reconstructing means 25, in Figure 1), generates photoacoustic image information and ultrasonic image information for the to-be-examined object (see, e.g., tomographic image generating section 70 and projection image generating section 60 transmitting data to the image combining means 27 in Figure 1), and combines the photoacoustic image information and ultrasonic image information to generate a combined photoacoustic/ultrasonic image (see, e.g., image combining means 27 sending data to the image display means 14 in Figure 1). NOTE: Examiner is interpreting “main controller” as collectively the components 23, 24, 25, 26, 60, 70, and 27 in Figure 1.
With respect to claim 2, HIROTA discloses that the photoacoustic probes include a laser output unit that outputs the laser pulse output to the to-be-examined object, and an ultrasonic probe that outputs the ultrasonic output to the to-be-examined object and receives the first ultrasonic input and the second ultrasonic input, respectively. HIROTA teaches a common probe (see probe 11 in Figure 4) that includes elements of a photoacoustic probe and elements of an ultrasonic probe. “The laser beam output by the laser unit 13 is guided to the probe 11 by a light guiding means such as an optical fiber, then irradiated onto subjects from the probe 11.” ([0090]). “The probe 11 further outputs (transmits) ultrasonic waves to subjects as a type of acoustic wave, and detects (receives) reflected ultrasonic waves reflected by the subjects. The probe 11 has a plurality of ultrasonic transducers which are arranged one dimensionally, for example. The probe 11 also detects ultrasonic waves (acoustic waves) which are generated by targets of measurement within subjects absorbing the laser beam output by the laser unit 13. The end portions of the light guiding means, that is, the leading end portions of a plurality of optical fibers or the like, are arranged along the arrangement direction of the plurality of ultrasonic transducers, and the laser beam is irradiated toward the subjects therefrom.” ([0091]).
With respect to claim 5 (depending from claim 2), HIROTA discloses wherein the ultrasonic probe outputs an ultrasonic output corresponding to an output trigger signal generated by the trigger controller (see, e.g., [0097]: “[T]he trigger control circuit 28 outputs an ultrasonic wave trigger signal that commands ultrasonic wave transmission to the transmission control circuit 30.”) and receives the ultrasonic input corresponding to the first-direction linear motion information according to the output trigger signal (see Figure 1: the ultrasonic probe 11 detects the “ultrasonic wave” from the subject while receiving data from the probe scanning mechanism 15 that is based on a signal from the trigger control circuit 28).
With respect to claim 6, HIROTA discloses wherein in the main controller (see, e.g., ultrasonic wave unit 12 in Figure 1, including both the ultrasound and photoacoustic image reconstruction means 25, 26), photoacoustic image information for the to-be-examined object is generated by sequentially combining the photoacoustic digital image signal (see, e.g., photoacoustic image reconstruction means 25: “The data separating means 24 provides the separated acoustic wave detection signals to the photoacoustic image reconstructing means 25….” ([0100])) with the photoacoustic image information corresponding to each trigger pulse of the output trigger signal in a positive direction or a negative direction of the first direction in units of photoacoustic digital image scan lines, and ultrasonic image information for the to-be-examined object is generated by sequentially combining the ultrasonic digital image signal (see, e.g., ultrasound image reconstruction means 26: “The data separating means 24…provides the separated ultrasonic wave signals to the ultrasound image reconstructing means 26.” ([0100])) with the ultrasonic image information corresponding to each trigger pulse of the output trigger signal in the positive or negative direction of the first direction in units of scan lines.
With respect to the “sequentially combining” for each of the photoacoustic and ultrasound images, “[t]he photoacoustic image reconstructing means 25 adds data from 64 ultrasonic transducers of the probe 11 at delay times corresponding to the positions of the ultrasonic transducers, to generate data corresponding to a single line (delayed addition method), for example….The ultrasound image reconstructing means 26 also generates data corresponding to each line of ultrasound images, which are tomographic images, from data generated based on the ultrasonic wave detection signals.” ([0102]).
With respect to the phrase “corresponding to each trigger pulse of the output trigger signal in a positive direction or a negative direction of the first direction in units of scan lines” for each of the photoacoustic and ultrasound images, as described above, each of the image reconstructing means 25, 26 generates data corresponding to each line. ([0102]). Moreover, each trigger pulse would necessarily occur either in the positive or negative direction of the “first direction.” As shown in Figure 4, the probe 11 moves in a linear direction (i.e., the first direction) and can only go forward (positive) or backward (negative), as indicated by the double-headed arrow.
With respect to claim 11, HIROTA discloses an output selector for selecting the laser pulse output or the ultrasonic output so as to be output. “The trigger control circuit 28 outputs the light trigger signal first, and then outputs the ultrasonic wave trigger signal thereafter. That is, the trigger control circuit 28 outputs the ultrasonic wave trigger signal following output of the light trigger signal. Irradiation of a laser beam onto a subject and detection of acoustic waves are performed by the light trigger signal being output, and transmission of ultrasonic waves toward the subject and detection of reflected ultrasonic waves are performed thereafter by output of the ultrasonic wave trigger signal.” ([0097]).
With respect to claim 14, HIROTA also discloses wherein while moving the photoacoustic probe in the first direction, which is a direction from the first end to the second end of the n-th scan line, the laser pulse output and the ultrasonic output are alternately performed in the photoacoustic probe. “The trigger control circuit 28 outputs the light trigger signal first, and then outputs the ultrasonic wave trigger signal thereafter. That is, the trigger control circuit 28 outputs the ultrasonic wave trigger signal following output of the light trigger signal. Irradiation of a laser beam onto a subject and detection of acoustic waves are performed by the light trigger signal being output, and transmission of ultrasonic waves toward the subject and detection of reflected ultrasonic waves are performed thereafter by output of the ultrasonic wave trigger signal.” ([0097]).
With respect to claim 15, HIROTA discloses wherein 3D image information for the to-be-examined object are generated by one-time 2D scanning of the photoacoustic probe, and the first ultrasonic input and the second ultrasonic input are alternately performed within each scanning line. “That is, the trigger control circuit 28 outputs the ultrasonic wave trigger signal following output of the light trigger signal. Irradiation of a laser beam onto a subject and detection of acoustic waves are performed by the light trigger signal being output, and transmission of ultrasonic waves toward the subject and detection of reflected ultrasonic waves are performed thereafter by output of the ultrasonic wave trigger signal.” ([0097]).
With respect to claim 19, HIROTA discloses that, with respect to the n-th scan line, the photoacoustic image information and the ultrasonic image information are alternately received while moving the photoacoustic probe in the first direction. “Following output of the light trigger signal, the trigger control circuit 28 outputs an ultrasonic wave trigger signal at a timing that detection of acoustic waves is completed. At this time, the A/D converting means 22 does not interrupt sampling of ultrasonic wave signals, but continues to execute sampling. In other words, the trigger control circuit 28 outputs the ultrasonic wave trigger signal in a state in which the A/D converting means 22 is continuing sampling of the ultrasonic wave signals. The target of detection of the probe 11 changes from acoustic waves to reflected ultrasonic waves, by the probe 11 transmitting ultrasonic waves in response to the ultrasonic wave trigger signal.” ([0099]) (emphasis added). While scanning, “the probe 11 moves in the direction of arrow W in FIG. 4.” ([0092]).
HIROTA further discloses that, with respect to an (n+1)-th scan line, the photoacoustic image information and the ultrasonic image information are alternately received while moving the photoacoustic probe. As described immediately above, the photoacoustic image information is provided first, followed by the ultrasonic image information, while the probe is moved. With respect to the probe moving, for the (n+1)-th scan line, in a direction opposite to the first direction: while scanning, “the probe 11 moves in the direction of arrow W in FIG. 4.” However, the arrow in Figure 4 is a double-headed arrow, indicating that the probe moves in a first direction until it reaches an end and then moves in a second direction that is opposite the first direction. “When the motor 84 of the scanning mechanism is driven, the threaded engagement portion 80 which is in threaded engagement with the rotating bore screw 82 advances and retreats, the probe 11 moves in the direction of arrow W in FIG. 4, and the two dimensional scanning described above is realized.” ([0092]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3, 4, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Appl. Publ. No. 2014/0024918 A1 to Kazuhiro Hirota (hereinafter “HIROTA”) as applied to claim 1 above, and further in view of U.S. Patent Appl. Publ. No. 2011/0201914 A1 to Wang et al. (hereinafter “WANG”) and U.S. Patent Appl. Publ. No. 2021/0363579 A1 to Daugharthy et al. (hereinafter “DAUGHARTHY”).
With respect to claim 3, HIROTA teaches a laser generator that outputs a laser pulse at a set interval to the to-be-examined object according to a reference pulse signal. (see, e.g., laser unit 13 and [0095]: “The trigger control circuit 28 outputs a Q switch trigger signal after the flash lamp 32 sufficiently pumps the Q switch pulse laser, for example. The Q switch is turned ON when the Q switch trigger signal is received, and causes a laser beam to be output from the laser unit 13.”).
HIROTA also teaches a trigger controller that generates an output trigger signal (see, e.g., trigger control circuit 28, which outputs sampling trigger signal, output ultrasonic wave trigger, light trigger signal, and Q switch trigger signal as shown in Figure 1).
However, HIROTA does not explicitly teach that the laser generator outputs a laser pulse according to first-direction linear motion information. HIROTA also does not explicitly teach that the trigger controller generates the output trigger signal at a set interval according to the reference pulse signal and the first-direction linear motion information. HIROTA also does not explicitly teach the pulse signal generator and the first linear encoder as recited in claim 3.
In the same field of endeavor, WANG teaches methods and systems that combine “ultrasound-based high-resolution in vivo micro-imaging systems, such as…photoacoustic imaging using a flexibly mounted cantilever beam.” ([0006]). “To incorporate photoacoustic imaging into an ultrasonic scanning system or imaging system 100, a photoacoustic excitation source, such as a tunable pulsed dye laser, and a light delivery system are introduced to the ultrasonic scanning system 100 as shown in FIG. 1…The laser must be synchronized with the imaging system 100. In the exemplary embodiment, the imaging system 100 interlaces trigger pulses between the laser and the ultrasonic pulser.” (emphasis added) ([0043]). With this purpose in mind, WANG teaches a “data acquisition subsystem 305 [that] produces a clock signal to synchronize all electronic components of the photoacoustic device.” (emphasis added) ([0048]).
Accordingly, WANG teaches a pulse signal generator that generates and outputs reference pulse signals at a set interval. The data acquisition subsystem of WANG generates and outputs reference pulse signals (i.e., a “clock signal to synchronize all electronic components” as taught in WANG). NOTE: Applicant does not define “pulse signal generator.” Examiner is interpreting the data acquisition subsystem of WANG as either constituting or including a pulse signal generator.
WANG also teaches that a laser generator can output a laser pulse according to first-direction linear motion information. (see, e.g., tunable laser 302 at [0046]: “At transducer locations [i.e., first-direction linear motion information] predefined by the data-analyzing computer 307, the motor controller generates trigger pulses synchronized with the clock signal [i.e., reference pulse signals], which are used to trigger the pulse laser and start the data acquisition sequence.”) (emphasis added). See also claim 6 of WANG: “further comprising…measuring a current location of the transducer using a motor controller, said focusing at least one light pulse comprises emitting the at least one light pulse at a predefined transducer location.”
WANG also teaches that the trigger controller can generate the output trigger signal at a set interval according to the reference pulse signal and the first-direction linear motion information. (see, e.g., [0046]: “At transducer locations [i.e., first-direction linear motion information] predefined by the data-analyzing computer 307, the motor controller generates trigger pulses synchronized with the clock signal [i.e., at a set interval according to the reference pulse signals], which are used to trigger the pulse laser and start the data acquisition sequence.”) See also claim 6 of WANG: “further comprising…measuring a current location of the transducer using a motor controller, said focusing at least one light pulse comprises emitting the at least one light pulse at a predefined transducer location.”
It would have been obvious to one having ordinary skill in the art to configure the HIROTA system to include a pulse signal generator that generates and outputs reference pulse signals for synchronizing the various components of the system, such as the laser generator and the trigger controller of HIROTA. In photoacoustic ultrasound imaging, it is important to correctly time the laser output followed by emissions from the tissue and then detection of those emissions (WANG at [0043]). As taught in WANG, providing a pulse generator that outputs a common reference signal for the relevant components (e.g., laser generator and trigger controller) would help synchronize their operation. In addition to using a common reference signal, configuring the laser generator and the trigger controller to operate according to motion information of the probe (e.g., location or direction) would not only help synchronize their operation but also increase the likelihood that the detected emissions are properly located within the image. There would be a reasonable expectation of success because, as taught in WANG, pulse generators can be applied to synchronize systems, such as photoacoustic and ultrasound systems.
However, WANG does not explicitly teach a first linear encoder that generates first-direction linear motion information of the photoacoustic probe.
DAUGHARTHY teaches an automated apparatus for sequencing and volumetric imaging. (Abstract). The system includes a stage for retaining a sample holder. ([0079]). As can be seen in Figures 4 and 6, the operation of DAUGHARTHY’s apparatus involves careful timing and movement of the stage relative to optical components used for laser scanning. (see, e.g., [0111]). “FIG. 6 is a schematic timing diagram illustrating an example of coordination between optical imaging, illumination, motion control, and acquisition systems for the purposes of achieving optimal imaging frame rates.” ([0113]). With this in mind, DAUGHARTHY uses various encoders for providing precise position information. “Track for stage motion with linear encoders for positional feedback is provided.” ([0110], see also [0092] and [0111]).
It would have been obvious to one having ordinary skill in the art to use a linear encoder to provide motion information. It is important to know the location of the probe when it detects emissions so that those emissions will be correctly positioned within the image. (see, e.g., [0102] of HIROTA). Moreover, HIROTA involves two-dimensional scanning of the probe (see, e.g., Figure 5 of HIROTA). As such, one would be motivated to add a linear encoder to provide the motion information as the probe moves along one of the dimensions. There would be a reasonable expectation of success because, as taught in DAUGHARTHY, linear encoders can be used to provide precise position data for optical components used with laser scanning.
With respect to claim 4, HIROTA in view of WANG teaches that the ultrasonic transceiving unit generates the photoacoustic image signal and the ultrasonic image signal corresponding to the first-direction linear motion information according to the output trigger signal, respectively. As described above, WANG teaches detecting the emissions caused by the laser and detecting the emissions caused by ultrasound while knowing the location of the probe. (see, e.g., [0046]). The trigger control circuit of HIROTA controls both detecting the emissions caused by the laser and detecting the emissions caused by ultrasound. “Irradiation of a laser beam onto a subject and detection of acoustic waves are performed by the light trigger signal being output, and transmission of ultrasonic waves toward the subject and detection of reflected ultrasonic waves are performed thereafter by output of the ultrasonic wave trigger signal.” ([0097] of HIROTA).
With respect to claim 16, HIROTA teaches scanning in two-dimensions. See, e.g., Figures 4 and 5 of HIROTA and [0092]: “Note that when photoacoustic images or ultrasonic images of subjects are obtained, the probe 11 is moved in a direction substantially perpendicular to the direction in which the ultrasonic transducers and the end portions of the light guiding means extend, to thereby two dimensionally scan the subjects with the laser beam and the ultrasonic waves.” WANG teaches that it is important to know the location of the probe when detecting emissions from tissue. ([0046] of WANG).
DAUGHARTHY teaches using a second linear encoder for generating the second-direction linear motion information. “The stage may be positioned using high-precision motion control systems, such as linear servo motors with optical encoders. Optical encoder systems may be furnished in either absolute or relative formats. In the case where the motor controller reads one or more relative encoders, homing routines may be implemented relative to limit sensors and/or physical limits to provide repeatable axis positioning.” ([0079]). “A sample stage with temperature control and XY motion system is provided and is shown with a glass slide sample holder. A track for stage motion with linear encoders is operatively connected to the stage.” ([0109]).
DAUGHARTHY teaches using a memory for storing plane coordinate values of the probe, determined by the first-direction linear motion information and the second-direction linear motion information. For example, DAUGHARTHY teaches storing position information of the objective lens (i.e., the probe): “wherein the objective lens is positioned at position k, and a Z offset is determined and stored.” ([0119]). “A microcontroller system coordinates motion systems in XYZ axes along with the illumination/excitation light source and camera sensor such that the motion systems position the sample relative to the optical system, then image data is acquired at that position with global shutter capture or rolling shutter or “all lines firing” rolling shutter capture using synchronized illumination.” ([0094]). “In these cases, a three-dimensional coordinate system based on physical or engineering units may be employed to dictate arbitrary three-dimensional imaging positions that are then disseminated to the participating motion control and imaging systems. Employing such a system allows for imaging that is constrained exclusively to the region of a three-dimensional matrix in which sample voxels of interest exist.” ([0103]). See also Figure 11 (storing corrected positions of the device or sample).
It would have been obvious to one having ordinary skill in the art to use a second linear encoder to provide motion information, and to use a memory for storing plane coordinate values of the photoacoustic probe, determined by the first-direction linear motion information and the second-direction linear motion information, photoacoustic image information at the plane coordinate values, and ultrasonic image information for the plane coordinate values of the photoacoustic probe and the to-be-examined object at the plane coordinate values. The purpose of photoacoustic ultrasound imaging is to map emission data (e.g., emission signals from the tissue caused by laser or ultrasound) to a two-dimensional or three-dimensional space. With that purpose in mind, one skilled in the art would configure the system to use two linear encoders to identify locations along the XY plane and provide that location data along with emission data for generating 2D and 3D images that include the “photoacoustic image information at the plane coordinate values” and the “ultrasonic image information for the plane coordinate values.” There would be a reasonable expectation of success because, as taught in DAUGHARTHY, linear encoders and imaging systems can be used to provide precise position data for optical components used with laser scanning.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON P GROSS whose telephone number is (571)272-1386. The examiner can normally be reached Monday-Friday 9:00-5:00 CT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne M. Kozak can be reached at (571) 270-5284. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASON P GROSS/ Examiner, Art Unit 3797
/SERKAN AKAR/ Primary Examiner, Art Unit 3797