DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The Information Disclosure Statement filed 06/17/2024 has been considered by the examiner.
Priority
Acknowledgement is made of applicant’s claim for foreign priority. All claims have been examined using the effective filing date of 09/27/2021.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3 and 5-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more. The claims recite the following:
(claim 1) “An information processing apparatus (statutory subject matter) comprising at least one processor (generic computer), wherein the processor is configured to: acquire a plurality of images to which mutually independent attribute information is assigned (a person can acquire a plurality of images with different attributes assigned – mental process); and limit the images, among the plurality of images, to images that are displayable on a display based on the attribute information. (a person can select or choose what images to display – mental process)”
(claim 2) “The information processing apparatus (statutory subject matter) according to claim 1 (mental process), wherein the processor (generic computer) is configured to cause only an image to which designated attribute information is assigned among the plurality of images to be displayable (a person can choose which image(s) to display based on their attribute information – mental process).”
(claim 3) “The information processing apparatus (statutory subject matter) according to claim 1 (mental process), wherein: the plurality of images are a group of images that are spatially or temporally continuous (further describes the kind of images used in the mental process), and the processor (generic computer) is configured to cause only an image in a range determined based on designated attribute information among the plurality of images to be displayable (a person can use a range to decide what images to display – mental process).”
(claim 5) “The information processing apparatus (statutory subject matter) according to claim 1 (mental process), wherein: each of the plurality of images includes a region of interest (further describes the kind of images used in the mental process), and the attribute information indicates an attribute of the region of interest. (further describes the kind of attribute information used in the mental process)”
(claim 6) “The information processing apparatus (statutory subject matter) according to claim 5 (mental process), wherein the region of interest is a region of a structure included in the image (further describes the kind of images used in the mental process).”
(claim 7) “The information processing apparatus (statutory subject matter) according to claim 5 (mental process), wherein the region of interest is a region of an abnormal shadow included in the image (further describes the kind of images used in the mental process).”
(claim 8) “The information processing apparatus (statutory subject matter) according to claim 5 (mental process), wherein the region of interest is a region that is included in the image (further describes the kind of images used in the mental process) and that is designated by a user (a person can choose an area within an image as a specific region of interest -mental process).”
(claim 9) “The information processing apparatus (statutory subject matter) according to claim 5 (mental process), wherein the attribute information indicates a type of the region of interest. (further describes the kind of attribute information used in the mental process)”
(claim 10) “The information processing apparatus (statutory subject matter) according to claim 5 (mental process), wherein the attribute information indicates a feature amount of the region of interest. (further describes the kind of attribute information used in the mental process)”
(claim 11) “The information processing apparatus (statutory subject matter) according to claim 5 (mental process), wherein the processor is configured to: extract the region of interest with respect to each of the plurality of images (well-understood, routine, and conventional extra-solution activity), and generate the attribute information based on a feature amount of the extracted region of interest. (further describes the kind of attribute information used in the mental process)”
(claim 12) “The information processing apparatus (statutory subject matter) according to claim 11 (mental process), wherein the processor (generic computer) is configured to assign information that indicates an extraction method used for extracting the region of interest to an image from which the region of interest is extracted, as the attribute information. (a person can assign the extraction method as attribute information -mental process)”
(claim 13) “The information processing apparatus (statutory subject matter) according to claim 1 (mental process), wherein the attribute information indicates a purpose for which the image is captured. (further describes the kind of attribute information used in the mental process)”
(claim 14) “The information processing apparatus (statutory subject matter) according to claim 1 (mental process), wherein the attribute information is input by a user. (a person can be a user and add any attribute information to images -mental process)”
(claim 15) “An information processing method (statutory subject matter) comprising: acquiring a plurality of images to which mutually independent attribute information is assigned (a person can acquire a plurality of images with different attributes assigned – mental process); and limiting the images, among the plurality of images, to images that are displayable on a display based on the attribute information. (a person can select or choose what images to display – mental process)”
(claim 16) “A non-transitory computer-readable storage medium (statutory subject matter) storing an information processing program causing a computer to execute a process (generic computer), the process comprising: acquiring a plurality of images to which mutually independent attribute information is assigned (a person can acquire a plurality of images with different attributes assigned – mental process); and limiting the images, among the plurality of images, to images that are displayable on a display based on the attribute information. (a person can select or choose what images to display – mental process)”
The above claims are directed to statutory subject matter [Eligibility Step 1]; however, they are directed towards mental processes (abstract ideas) [Eligibility Step 2A Prong 1] that are not integrated into a practical application because the additional limitations merely incorporate generic computer components as a tool to perform the process (see MPEP 2106.05(f)) [Eligibility Step 2A Prong 2]. Furthermore, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional limitations merely incorporate generic computer components as a tool to perform the process (see MPEP 2106.05(f)) [Eligibility Step 2B]. Claim 11 additionally includes the limitation “extract the region of interest with respect to each of the plurality of images”; this is considered well-understood, routine, and conventional extra-solution activity (see MPEP 2106.05(d)(II)(iv)) and thus does not amount to significantly more [Eligibility Step 2B].
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5-6, 8-9, and 13-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by TSUJIMOTO (WO 2013099125 A1).
With respect to claim 1, Tsujimoto teaches an information processing apparatus (“The image processing system according to this embodiment comprises an image server 1201, an image processing apparatus 102 and a display apparatus 103. The image processing apparatus 102 acquires diagnostic image data which was acquired by imaging a test object, and generates display data to be displayed on the display apparatus” paragraph 0102) comprising at least one processor (“The image processing apparatus 102 is a standard computer or workstation comprising such hardware resources as a CPU (Central Processing Unit)…” pages 4 (bottom) - 5 (top), paragraph 0014), wherein the processor is configured to: acquire a plurality of images (“The image processing apparatus 102 acquires diagnostic image data which was acquired by imaging a test object” paragraph 0102) to which mutually independent attribute information is assigned (“annotations are attached to diagnostic images stored in the image server 1201” paragraph 0103); and limit the images, among the plurality of images, to images that are displayable on a display based on the attribute information (see Fig 8 (annotation searching), Fig 13A (Search Result Display and Candidate Image Selection), Fig 13B (Displaying selected Image(s)), and Figure 14C for illustration of the interface).
With respect to claim 2, Tsujimoto teaches the information processing apparatus according to claim 1, wherein the processor is configured to cause only an image to which designated attribute information is assigned among the plurality of images to be displayable (“FIG. 14A is an example of the attribute information list displayed as the search result. The attribute information list 1401 includes the attribute information for each candidate position. The attribute information list 1401 includes a check box 1402 for selecting a corresponding candidate position from the list, and a sort button 1403 to perform sorting based on the attribute information. The user can select one or a plurality of candidate position(s) by selecting the corresponding check box 1402. If priority among the attribute information is set, a sort operation by a plurality of attribute information becomes possible.” Paragraph 0111 And “The display data is generated so that the diagnostic image, to which the annotation is attached, is displayed in the display position and at the display magnification, which were used when the annotation is attached to the candidate position selected in step S914. By performing this processing, an image reproducing the display image when the annotation was attached can be displayed.” Paragraph 0080 And figures 13A (Search Result Display and Candidate Image Selection), Fig 13B (Displaying selected Image (s))).
With respect to claim 3, Tsujimoto teaches the information processing apparatus according to claim 1, wherein: the plurality of images are a group of images that are spatially (“An image for detailed observation is displayed in the observation image display area 1005. In concrete terms, a part or all of the areas of the diagnostic image is/are displayed as an image for detailed observation at a set display magnification. The display magnification of the image for detailed observation is displayed in the section 1006 of the observation image display area 1005. The area of the test object to be observed in detail can be set or updated by a user's instruction via the externally connected input device, such as a touch panel or mouse 411. This setting or update is also possible by moving and zooming in/moving out (changing display magnification) of the currently displayed image. Each of the above mentioned areas may be created by dividing the display area of the general window 1001 by a single document interface, or each of the areas may be created as mutually different window areas by a multi-document interface.” Page 23, paragraph 0082) or temporally continuous (“The attribute information is, for example, attached date and time and the type and version of diagnostic support software.” Paragraph 0097 and “In Embodiment 2 however, annotations are attached to diagnostic images stored in the image server 1201, and a target position is searched for in the diagnostic images stored in the image server 1201. Therefore a target position can be searched for in a plurality of diagnostic images. For example, a target position is searched for in a plurality of diagnostic images acquired
from one patient. Thereby the progress of one patient can be observed and a state of a same lesion can be easily compared at various locations. Further, by searching a plurality of diagnostic images for a target position, annotations matching with similar cases and conditions can be easily recognized.” Paragraph 0103), and the processor is configured to cause only an image in a range determined based on designated attribute information among the plurality of images to be displayable (“A list 1017 of date and time information included in the stored annotation data may be displayed in another window or as a dialog, so that the user selects the date and time to be the search key out of this list. A plurality of dates and times may be used as a search key, or a certain period may be used as a search key.” Paragraph 0087).
With respect to claim 5, Tsujimoto teaches the information processing apparatus according to claim 1, wherein: each of the plurality of images includes a region of interest (“In Embodiment 2 however, annotations are attached to diagnostic images stored in the image server 1201, and a target position is searched for in the diagnostic images stored in the image server 1201. Therefore a target position can be searched for in a plurality of diagnostic images. For example, a target position is searched for in a plurality of diagnostic images acquired from one patient. Thereby the progress of one patient can be observed and a state of a same lesion can be easily compared at various locations. Further, by searching a plurality of diagnostic images for a target position, annotations matching with similar cases and conditions can be easily recognized” paragraph 0103), and the attribute information indicates an attribute of the region of interest (“In Embodiment 2 however, annotations are attached to diagnostic images stored in the image server 1201, and a target position is searched for in the diagnostic images stored in the image server 1201. Therefore a target position can be searched for in a plurality of diagnostic images. For example, a target position is searched for in a plurality of diagnostic images acquired from one patient. Thereby the progress of one patient can be observed and a state of a same lesion can be easily compared at various locations. Further, by searching a plurality of diagnostic images for a target position, annotations matching with similar cases and conditions can be easily recognized” paragraph 0103).
With respect to claim 6, Tsujimoto teaches the information processing apparatus according to claim 5, wherein the region of interest is a region of a structure included in the image (“In concrete terms, the attribute information includes date and time information, user information, diagnostic information, and diagnostic criterion information.” paragraph 0055 page 16 and “The diagnostic criterion information is information summarizing the diagnostic classifications for each organ” paragraph 0055 page 17).
With respect to claim 8, Tsujimoto teaches the information processing apparatus according to claim 5, wherein the region of interest is a region that is included in the image (“In Embodiment 2 however, annotations are attached to diagnostic images stored in the image server 1201, and a target position is searched for in the diagnostic images stored in the image server 1201. Therefore a target position can be searched for in a plurality of diagnostic images. For example, a target position is searched for in a plurality of diagnostic images acquired from one patient. Thereby the progress of one patient can be observed and a state of a same lesion can be easily compared at various locations. Further, by searching a plurality of diagnostic images for a target position, annotations matching with similar cases and conditions can be easily recognized” paragraph 0103) and that is designated by a user (“The annotation data generating unit 305 acquires information on positional coordinates (coordinates of a position specified by the user (position where the annotation is attached) on the display screen (screen of the display apparatus 103) from the user input information acquiring unit 303. The annotation data generating unit 305 acquires display magnification information from the displaying apparatus information acquiring unit 304. Using this information, the annotation data generating unit 305 converts the positional coordinates on the display screen into positional coordinates on the diagnostic image.” Paragraph 0028 (bottom)).
With respect to claim 9, Tsujimoto teaches the information processing apparatus according to claim 5, wherein the attribute information indicates a type of the region of interest (“In concrete terms, the attribute information includes date and time information, user information, diagnostic information, and diagnostic criterion information.” Paragraph 0055 lines 9-10 page 16 And “The diagnostic criterion information is information summarizing the diagnostic classifications for each organ, according to the actual situation of each country and each region. The diagnostic classification indicates each stage of each organ. In the case of stomach cancer, for example, a diagnostic classification specified by cancer classification code alpha, which is a diagnostic criterion for a region, may be different from a diagnostic classification specified by a cancer classification code beta, which is a diagnostic criterion for another region. Therefore information on the diagnostic criterion and the diagnostic classification used by the user for diagnosing the diagnostic image is attached to the attribute information as diagnostic criterion information. The diagnostic criterion and diagnostic classification will be described later with reference to FIG. 15.” Page 17 paragraph 0055 (middle) and Fig. 15A).
With respect to claim 13, Tsujimoto teaches the information processing apparatus according to claim 1, wherein the attribute information indicates a purpose for which the image is captured (“Attribute information is used for narrowing down annotations the observer (e.g. doctor, technician) should have an interest in (pay attention to), out of the many annotations attached to the diagnostic image, as mentioned later. Therefore any kind of information can be used as attribute information if the information is useful to narrow down (search) annotations. For example, information on a time when an annotation is attached or on an individual user who is attached as an annotation (an annotation is attached automatically by a computer or manually by an individual), and information on purpose, intention and viewpoint of attaching an annotation, can be used as attribute information.” Paragraph 0028 (top)).
With respect to claim 14, Tsujimoto teaches the information processing apparatus according to claim 1, wherein the attribute information is input by a user (“The annotation data generating unit 305 acquires information on positional coordinates (coordinates of a position specified by the user (position where the annotation is attached) on the display screen (screen of the display apparatus 103) from the user input information acquiring unit 303. The annotation data generating unit 305 acquires display magnification information from the displaying apparatus information acquiring unit 304. Using this information, the annotation data generating unit 305 converts the positional coordinates on the display screen into positional coordinates on the diagnostic image. Then the annotation data generating unit 305 generates annotation data, including text information inputted as an annotation (text data), the information on positional coordinates on the diagnostic image, the display magnification information, and the attribute information.” Paragraph 0028 (bottom)).
With respect to claim 15, Tsujimoto teaches an information processing method comprising: acquiring a plurality of images to which mutually independent attribute information is assigned (“In Embodiment 2 however, annotations are attached to diagnostic images stored in the image server 1201, and a target position is searched for in the diagnostic images stored in the image server 1201.” Paragraph 0103 (bottom)); and limiting the images, among the plurality of images, to images that are displayable on a display based on the attribute information (see Fig 8 (annotation searching), Fig 13A (Search Result Display and Candidate Image Selection), Fig 13B (Displaying selected Image (s)), and Figure 14C for illustration of interface ).
With respect to claim 16, Tsujimoto teaches a non-transitory computer-readable storage medium storing an information processing program causing a computer to execute a process (“The object of the present invention may be achieved by the following. That is, a recording medium (or storage medium) recording the software-based recording program codes, which implement all or a part of the functions of the above mentioned embodiments, is supplied to a system or an apparatus. Then a computer (or CPU or MPU) of the system or an apparatus reads and executes the program codes stored in the recording medium. In this case, the program codes read from the recording medium implement the functions of the above mentioned embodiments, and the recording medium recording the program codes constitutes the present invention.” Page 34 (bottom)-35 (top) paragraph 0115 ), the process comprising: acquiring a plurality of images to which mutually independent attribute information is assigned (“In Embodiment 2 however, annotations are attached to diagnostic images stored in the image server 1201, and a target position is searched for in the diagnostic images stored in the image server 1201.” Paragraph 0103 (bottom)); and limiting the images, among the plurality of images, to images that are displayable on a display based on the attribute information (see Fig 8 (annotation searching), Fig 13A (Search Result Display and Candidate Image Selection), Fig 13B (Displaying selected Image (s)), and Figure 14C for illustration of interface ).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Tsujimoto as applied to claim 1 above, and further in view of Sakane (US 20210019580 A1).
With respect to claim 4, Tsujimoto teaches the information processing apparatus according to claim 1, including limiting results based on the attribute information (“FIG. 14A is an example of the attribute information list displayed as the search result. The attribute information list 1401 includes the attribute information for each candidate position. The attribute information list 1401 includes a check box 1402 for selecting a corresponding candidate position from the list, and a sort button 1403 to perform sorting based on the attribute information. The user can select one or a plurality of candidate position(s) by selecting the corresponding check box 1402. If priority among the attribute information is set, a sort operation by a plurality of attribute information becomes possible.” Paragraph 0111 And “The display data is generated so that the diagnostic image, to which the annotation is attached, is displayed in the display position and at the display magnification, which were used when the annotation is attached to the candidate position selected in step S914. By performing this processing, an image reproducing the display image when the annotation was attached can be displayed.” Paragraph 0080 And figures 13A (S901 and A1307), 13B and 14C), but does not teach any further limitations.
Sakane teaches displaying a slider bar for receiving an operation of selecting an image to be displayed on the display among the plurality of images on the display (“A plurality of result images may be displayed for each of the trained models in a manner such that the plurality of result images are displayed by being arranged in order in, for example, each of the areas depicted in FIG. 13 in which result images are displayed. A slider bar may be displayed below the images so that only an image specified by the user among a plurality of result images can be selectively displayed.” Paragraph 0094).
Sakane is analogous art in the same field of endeavor as the claimed invention. Sakane is directed towards an image display system (“The screen 200 depicted in FIG. 11 displays, below a model condition field 201, combinations of pieces of identification information (model IDs) of trained models, pieces of metadata (creator, cell type, method), and result images which are arranged in order, the number of combinations being equal to the number of trained models selected in step S12.” Paragraph 0078). A person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to combine the systems of Tsujimoto and Sakane by utilizing Sakane’s teachings of a slider bar on a results display in combination with Tsujimoto’s results display, with the expectation that doing so would lead to easier selection of information, which would enable a smoother pathological diagnostic operation and workflow (“…then understanding the nature of each annotation information and the selection of information become easier, and a pathological diagnostic operation can be smoother in each step of the pathological diagnosis work flow.” Page 16 (top) and 17 (bottom) paragraph 0055).
Claims 7, 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Tsujimoto as applied to claim 5 above, and further in view of Noguchi (US 20200170624 A1).
With respect to claim 7, Tsujimoto teaches the information processing apparatus according to claim 5, but does not teach the further limitations. Noguchi teaches wherein the region of interest is a region of an abnormal shadow included in the image (“The lesion detector 122 may also set a threshold value 411 for the smoothed distribution 401, detect a shadow region 430 caused by a nipple or a lesion, and a shadow region 431 caused by the probe 104 not in contact with the breast, and excludes the regions 430 and 431 from the filter map 420-2. The threshold value may be set in advance, or the average value of the smoothed distribution 401 may be used for the threshold value.” Paragraph 0055).
Noguchi is analogous art in the same field of endeavor as the claimed invention. Noguchi is directed towards “an image diagnostic apparatus and an image diagnostic method in the medical field.” (paragraph 0002). A person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to combine the teachings of Tsujimoto and Noguchi by utilizing Noguchi’s teachings of detecting regions of interest (lesions) as a part of Tsujimoto’s annotation generation system with the expectation that doing so would lead to the expansion of Tsujimoto’s ability to collect and review diagnostic information (“The present invention is aiming at providing an apparatus and a method that can detect a lesion automatically with a high degree of accuracy in diagnosis using ultrasound images.” Paragraph 0008).
With respect to claim 10, Tsujimoto teaches the information processing apparatus according to claim 5, but does not teach any further limitations. Noguchi teaches wherein the attribute information indicates a feature amount of the region of interest (“The lesion detector 122 calculates an average value of feature amounts of a plurality of pixels included in the analysis layer for each analysis layer.” Paragraph 0049 and “Examples of the feature amount of the tomographic image include brightness, dispersion, texture, and co-occurrence feature.” Paragraph 0050).
Noguchi is analogous art in the same field of endeavor as the claimed invention. Noguchi is directed towards “an image diagnostic apparatus and an image diagnostic method in the medical field.” (paragraph 0002). A person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to combine the teachings of Tsujimoto and Noguchi by utilizing Noguchi’s teachings of detecting regions of interest (lesions) as a part of Tsujimoto’s annotation generation system with the expectation that doing so would lead to the expansion of Tsujimoto’s ability to collect and review diagnostic information (“The present invention is aiming at providing an apparatus and a method that can detect a lesion automatically with a high degree of accuracy in diagnosis using ultrasound images.” Paragraph 0008).
With respect to claim 11, Tsujimoto teaches the information processing apparatus according to claim 5, but does not teach any further limitations. Noguchi teaches wherein the processor (see Figure 1, element 120) is configured to: extract the region of interest with respect to each of the plurality of images (“The lesion detector 122 calculates an average value of feature amounts of a plurality of pixels included in the analysis layer for each analysis layer.” Paragraph 0049 and “Examples of the feature amount of the tomographic image include brightness, dispersion, texture, and co-occurrence feature.” Paragraph 0050), and generate the attribute information based on a feature amount of the extracted region of interest (“The lesion detector 122 calculates an average value of feature amounts of a plurality of pixels included in the analysis layer for each analysis layer.” Paragraph 0049 and “Examples of the feature amount of the tomographic image include brightness, dispersion, texture, and co-occurrence feature.” Paragraph 0050; brightness, dispersion, and texture as attribute information).
Noguchi is analogous art in the same field of endeavor as the claimed invention. Noguchi is directed towards “an image diagnostic apparatus and an image diagnostic method in the medical field.” (paragraph 0002). A person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to combine the teachings of Tsujimoto and Noguchi by utilizing Noguchi’s teachings of detecting regions of interest (lesions) as a part of Tsujimoto’s annotation generation system with the expectation that doing so would lead to the expansion of Tsujimoto’s ability to collect and review diagnostic information (“The present invention is aiming at providing an apparatus and a method that can detect a lesion automatically with a high degree of accuracy in diagnosis using ultrasound images.” Paragraph 0008).
With respect to claim 12, Tsujimoto and Noguchi teach the information processing apparatus according to claim 11, wherein the processor is configured to assign information that indicates an extraction method used for extracting the region of interest to an image from which the region of interest is extracted, as the attribute information (“The user attribute is information to indicate a purpose (view point, role) or a method when each user attached an annotation, and possible examples of the user attribute are "pathologist", "technician", "clinician" and "automatic diagnosis". If the user attribute is associated with the annotation as one of the above mentioned user information such that the search can be performed by the user attribute, then understanding the nature of each annotation information and the selection of information become easier, and a pathological diagnostic operation can be smoother in each step of the pathological diagnosis work flow” page 16 paragraph 0044 (bottom)).
Noguchi teaches wherein the processor (see Figure 1, element 120) is configured to: extract the region of interest with respect to each of the plurality of images (“The lesion detector 122 calculates an average value of feature amounts of a plurality of pixels included in the analysis layer for each analysis layer.” Paragraph 0049 and “Examples of the feature amount of the tomographic image include brightness, dispersion, texture, and co-occurrence feature.” Paragraph 0050).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REBECCA C WILLIAMS whose telephone number is (571)272-7074. The examiner can normally be reached M-F 7:30am - 4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew W Bee can be reached at (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/REBECCA COLETTE WILLIAMS/Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677