DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Preliminary Remarks
This is a reply to the application filed on 02/28/2025, in which claims 1-20 remain pending in the present application, with claims 1, 15, and 20 being independent claims.
When making claim amendments, the applicant is encouraged to consider the references in their entireties, including those portions that have not been cited by the examiner, and their equivalents, as they may most broadly and appropriately apply to any anticipated claim amendments.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on March 26, 2025, and February 5, 2026, are in compliance with the provisions of 37 CFR 1.97 and have been considered by the Examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
An Examiner is obliged to give claims their broadest reasonable interpretation consistent with the specification during examination. The broadest reasonable interpretation of a claim drawn to a computer-readable medium typically covers both forms of non-transitory tangible media and transitory propagating signals per se, in view of the ordinary and customary meaning of computer-readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter.
For claim 20, the claimed invention is directed to non-statutory subject matter. The claimed "computer-readable storage medium" does not fall within at least one of the four categories of patent-eligible subject matter because, under the broadest reasonable interpretation set forth above, the claim covers a transitory signal per se.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Afrouzi et al. (US 11348269 B1, hereinafter referred to as “Afrouzi”) in view of Manivannan et al. (US 20220084210 A1, hereinafter referred to as “Manivannan”).
Regarding claim 1, Afrouzi discloses an infrared image processing method, comprising:
acquiring an infrared image (see Afrouzi, Column 141, lines 38-40: “the images captured may be infrared images. Such images may capture live objects, such as humans and animals”);
performing target detection on the infrared image to determine one or more target objects within the infrared image and obtaining a target detection result that includes a category of each of the one or more target objects (see Afrouzi, Column 162, lines 5-15: “robot may detect a human (or other objects having different material and texture) using diffraction. In some cases, the robot may use a spectrometer, a device that harnesses the concept of diffraction, to detect objects, such as humans and animals. A spectrometer uses diffraction (and the subsequent interference) of light from slits to separate wavelengths, such that faint peaks of energy at specific wavelengths may be detected and recorded. Therefore, the results provided by a spectrometer may be used to distinguish a material or texture and hence a type of object. For example, output of a spectrometer may be used to identify liquids, animals, or dog incidents. In some embodiments, detection of a particular event by various sensors of the robot or other smart devices within the area in a particular pattern or order may increase the confidence of detection of the particular event”);
determining a target object of interest from the one or more target objects based on the target detection results (see Afrouzi, Column 183, lines 28-35: “images of the environment captured by a camera of the robot may be used by the processor to identify objects observed, extract features of the objects observed (e.g., shapes, colors, size, angles, etc.), and determine the type of objects observed based on the extracted features”);
determining a contour area of the target object of interest (see Afrouzi, Column 190, lines 20-30: “the processor may identify objects by identifying particular geometric features associated with different objects. In some embodiments, the processor may describe a geometric feature by defining a region R of a binary image ... the processor may describe a perimeter P of the region R by defining the region as the length of its outer contour, wherein R is connected”);
performing true-color processing on the contour area (see Afrouzi, Column 198, lines 10-15: “processor may represent color images by using an array of pixels in which different models may be used to order the individual color components. In embodiments, a pixel in a true color image may take any color value in its color space and may fall within the discrete range of its individual color components”).
Regarding claim 1, Afrouzi discloses all the claimed limitations with the exception of performing the true-color processing based on the category of the target object of interest to highlight the target object of interest in the infrared image.
Manivannan from the same or similar fields of endeavor discloses based on the category of the target object of interest to highlight the target object of interest in the infrared image (see Manivannan, paragraphs [0091]-[0092]: “The three color images can be combined to display the true color image or they can be displayed individually to highlight different features of the retina … The fundus imaging system can also provide an infrared (IR) reflectance image, such as by using an infrared laser (or other infrared light source)”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Manivannan with the teachings of Afrouzi. The motivation for doing so would be to use the automated segmentation and identification system/method disclosed in Manivannan for identifying geographic atrophy (GA) phenotypic patterns in fundus autofluorescence images, which displays the true color image to highlight different features of the retina individually and provides an infrared (IR) reflectance image using an infrared laser, thereby highlighting the target object of interest in the infrared image based on the category of the target object of interest. Displaying the target object of interest with different colors and textures according to its category presents it more prominently, so that it appears clearer in the infrared image and is easier for the human eye to observe and recognize.
Regarding claim 2, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared image processing method according to claim 1, wherein determining the target object of interest from the one or more target objects includes:
determining the target object of interest based on a selection operation on the target object (see Afrouzi, Column 282, lines 26-45: “the application may display the map of the environment as it is being built and updated. The application may also be used to define a path of the robot and zones and label areas. For example, FIG. 253A illustrates a map 6400 partially built on a screen of communication device 6401. FIG. 253B illustrates the completed map 6400 at a later time. In FIG. 253C, the user uses the application to define a path of the robot using path tool 6402 to draw path 6403. In some cases, the processor of the robot may adjust the path defined by the user based on observations of the environment or the use may adjust the path defined by the processor. In FIG. 253D, the user uses the application to define zones 6404 (e.g., boundary zones, vacuuming zones, mopping zones, etc.) using boundary tools 6405. In FIG. 253E, the user uses labelling tool 6406 to add labels such as bedroom, laundry, living room, and kitchen to the map 6400. In FIG. 253F, the kitchen and living room are shown”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 3, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared image processing method according to claim 1, wherein performing true-color processing on the contour area based on the category of the target object of interest to highlight the target object of interest in the infrared image includes:
determining a corresponding true-color template based on the category of the target object of interest, and coloring the contour area of the target object of interest in the infrared image according to the corresponding true-color template for display (see Afrouzi, Column 107, lines 29-47: “the processor may localize the robot by localizing against the dominant color in each area. In some embodiments, the processor may use region labeling or region coloring to identify parts of an image that have a logical connection to each other or belong to a certain object/scene. In some embodiments, sensitivity may be adjusted to be more inclusive or more exclusive. In some embodiments, the processor may use a recursive method, an iterative depth-first method, an iterative breadth-first search method, or another method to find an unmarked pixel. In some embodiments, the processor may compare surrounding pixel values with the value of the respective unmarked pixel. If the pixel values fall within a threshold of the value of the unmarked pixel, the processor may mark all the pixels as belonging to the same category and may assign a label to all the pixels”).
The motivation for combining the references has been discussed in claim 1 above.
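For illustrative purposes only, the template-based coloring addressed in claim 3 above may be sketched as follows. The color values, category names, blending weights, and NumPy/OpenCV tooling are assumptions of the sketch and are not recited in the claim or the cited references.

```python
import numpy as np
import cv2

# Stand-in grayscale infrared frame.
ir_image = np.random.randint(0, 256, (240, 320), dtype=np.uint8)

# Hypothetical per-category true-color templates (BGR values).
COLOR_TEMPLATES = {"person": (0, 0, 255), "animal": (0, 255, 0), "vehicle": (255, 0, 0)}

def colorize_contour_area(gray, contour_mask, category):
    """Color the contour area according to the category's true-color template."""
    color_frame = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    overlay = color_frame.copy()
    overlay[contour_mask > 0] = COLOR_TEMPLATES[category]
    # Blend so thermal detail remains visible beneath the color template.
    return cv2.addWeighted(overlay, 0.5, color_frame, 0.5, 0)

# Stand-in contour area for a detected target of category "person".
contour_mask = np.zeros_like(ir_image)
cv2.circle(contour_mask, (160, 120), 40, 255, -1)
highlighted = colorize_contour_area(ir_image, contour_mask, "person")
```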
Regarding claim 4, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared image processing method according to claim 3, wherein determining the contour area of the target object of interest includes:
segmenting the target object of interest based on its position information to obtain a contour segmentation image of the target object of interest (see Afrouzi, Column 107, lines 48-58: “a binary image includes either black or white regions. Pixels along the edge of a binary region (i.e., border) may be identified by morphological operations and difference images. Marking the pixels along the contour may have some useful applications, however, an ordered sequence of border pixel coordinates for describing the contour of a region may also be determined”);
wherein determining a corresponding true-color template based on the category of the target object of interest, and coloring the contour area of the target object of interest in the infrared image according to the corresponding true-color template for display comprises:
determining a corresponding true-color template based on the category of the target object of interest, coloring the contour segmentation image according to the corresponding true-color template (see Afrouzi, Column 107, line 64 – Column 108 line 10: “an image matrix may represent an image, wherein the value of each entry in the matrix may be the pixel intensity or color of a corresponding pixel within the image. In some embodiments, the processor may determine a length of a contour using chain codes and differential chain codes. In some embodiments, a chain code algorithm may begin by traversing a contour from a given starting point xs and may encode the relative position between adjacent contour points using a directional code for either 4-connected or 8-connected neighborhoods. In some embodiments, the processor may determine the length of the resulting path as the sum of the individual segments, which may be used as an approximation of the actual length of the contour”),
fusing the colored contour segmentation image with the original infrared image, and/or enlarging the colored contour segmentation image by a preset ratio and fusing it with the original infrared image at a specified display position (see Afrouzi, Column 106, line 62 – Column 107 line 11: “the processor may determine a transformation function for depth readings from a LIDAR, depth camera, or other depth sensing device. In some embodiments, the processor may determine a transformation function for various other types of data, such as images from a CCD camera, readings from an IMU, readings from a gyroscope, etc. The transformation function may demonstrate a current pose of the robot and a next pose of the robot in the next time slot. Various types of gathered data may be coupled in each time stamp and the processor may fuse them together using a transformation function that provides an initial pose and a next pose of the robot. In some embodiments, the processor may use minimum mean squared error to fuse newly collected data with the previously collected data. This may be done for transformations from previous readings collected by a single device or from fused readings or coupled data”).
The motivation for combining the references has been discussed in claim 1 above.
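For illustrative purposes only, the enlarge-and-fuse display option addressed in claim 4 above may be sketched as follows. The preset ratio, display position, blending weights, and OpenCV tooling are assumptions of the sketch.

```python
import numpy as np
import cv2

# Stand-in infrared frame (converted to three channels for color fusion)
# and a stand-in colored contour segmentation image.
frame = cv2.cvtColor(np.random.randint(0, 256, (240, 320), dtype=np.uint8),
                     cv2.COLOR_GRAY2BGR)
colored_segment = np.full((40, 60, 3), (0, 0, 255), dtype=np.uint8)

PRESET_RATIO = 2.0        # assumed enlargement ratio
DISPLAY_POS = (10, 10)    # assumed (x, y) display position in the frame

# Enlarge the colored contour segmentation image by the preset ratio.
enlarged = cv2.resize(colored_segment, None, fx=PRESET_RATIO, fy=PRESET_RATIO,
                      interpolation=cv2.INTER_NEAREST)

# Fuse the enlarged segmentation with the original frame at the specified position.
x, y = DISPLAY_POS
h, w = enlarged.shape[:2]
roi = frame[y:y + h, x:x + w]
frame[y:y + h, x:x + w] = cv2.addWeighted(enlarged, 0.7, roi, 0.3, 0)
```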
Regarding claim 5, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared image processing method according to claim 1, wherein, after determining a target object of interest from the one or more target objects, the method further comprises:
performing target tracking on the target object of interest to obtain trajectory prediction information (see Afrouzi, Column 102, lines 50-55: “FIG. 105A illustrates three images 3000, 3001, and 3002 captured by an image sensor of the robot during navigation with same points 3003 in each image. Based on the intended trajectory of the robot, same points 3003 are expected to be positioned in locations”); and
displaying motion trajectory prompt information of the target object of interest in the infrared image based on the trajectory prediction information (see Afrouzi, Column 280, lines 25-31: “the map of the environment may be accessed through the application of a communication device and displayed on a screen of the communication device, e.g., on a touchscreen. In some embodiments, the processor of the robot may send the map of the environment to the application at various stages of completion of the map or after completion”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 6, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared image processing method according to claim 5, wherein displaying the motion trajectory prompt information of the target object of interest in the infrared image based on the trajectory prediction information includes:
displaying motion direction prompt information on one side of the target object of interest in the infrared image based on the trajectory prediction information (see Afrouzi, Column 280, lines 25-31: “FIG. 237 illustrates a movement path 12200 of robot 12201. If robot 12201 is suddenly pushed towards the left direction indicated by arrow 12202, the portion 12203 of movement path 12200 may shift towards the left. To prevent this from occurring, the processor of robot 12201 may readjust based on the association between features observed and features of data included the global or local map”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 7, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared image processing method according to claim 5, wherein displaying the motion trajectory prompt information of the target object of interest in the infrared image based on the trajectory prediction information includes:
determining a motion path of the target object of interest based on the trajectory prediction information and displaying the motion path in the infrared image (see Afrouzi, Column 203, lines 42-48: “the processor uses the aggregate map to determine a navigational path within the environment, which in some cases, may include a coverage path in various areas (e.g., areas including collections of adjacent unit tiles, like rooms in a multi-room work environment). Various navigation paths may be implemented based on the environmental characteristics of different locations within the aggregate map”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 8, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared image processing method according to claim 5, wherein displaying the motion trajectory prompt information of the target object of interest in the infrared image based on the trajectory prediction information includes:
determining a next predicted position of the target object of interest in the infrared image based on the trajectory prediction information, and displaying a corresponding virtual image of the target object of interest at the next predicted position in the infrared image (see Afrouzi, Column 159, lines 38-43: “the processor of the robot may first build a global map of a first area (e.g., a bedroom) and cover that first area before moving to a next area to map and cover. In some embodiments, a user may use an application of a communication device paired with the robot to view a next zone for coverage or the path of the robot”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 9, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared image processing method according to claim 5, wherein performing target tracking on the target object of interest to obtain trajectory prediction information includes:
performing feature extraction on a target region of the target object of interest to obtain a first feature extraction result (see Afrouzi, Column 186, lines 37-49: “the robot includes an image sensor (e.g., camera) to provide an input image and an object identification and data processing unit, which includes a feature extraction, feature selection and object classifier unit configured to identify a class to which the object belongs. In some embodiments, the identification of the object that is included in the image data input by the camera is based on provided data for identifying the object and the image training data set”);
setting target boxes with different step sizes and scales, searching from the target region as a starting position, respectively performing feature extraction on regions where the target boxes are located to obtain respective second feature extraction results (see Afrouzi, Column 100, lines 36-52: “The unique tags may be used to set and control the operation and execution of tasks within each subarea and to set the order of coverage of each subarea. For example, the robot may cover a particular subarea first and another particular subarea last. In some embodiments, the order of coverage of the subareas is such that repeat coverage within the total area is minimized. In another embodiment, the order of coverage of the subareas is such that coverage time of the total area is minimized. The order of subareas may be changed depending on the task or desired outcome. The example provided only illustrates two subareas for simplicity but may be expanded to include multiple subareas, spaces, or environments, etc. In some embodiments, the processor may represent subareas using a stack structure, for example, for backtracking purposes wherein the path of the robot back to its starting position may be found using the stack structure”);
based on feature similarity between the second feature extraction result of each target box and the first feature extraction result, determining the region where the target box whose feature similarity meets a requirement is located as a target tracking region of the target object of interest (see Afrouzi, Column 258, lines 6-11: “depending on the speed of transmission, the size of information sent, and the speed of robot, some compression may be safely employed in this way. For example, a Direct Linear Transformation Algorithm may be used to find a correspondence or similarity between two images or planes”); and
obtaining the trajectory prediction information based on the target tracking region (see Afrouzi, Column 185, lines 6-15: “The processor may identify the same object 4501 within the image based on identifying similar features as those identified in the image of FIG. 204B. FIG. 204D illustrates the movement 4502 of the object 4501. The processor may determine that the object 4501 is a person based on trajectory and/or the speed of movement of the object 4501 (e.g., by determining total movement of the object between the images captured in FIGS. 204B and 204C and the time between when the images in FIGS. 204B and 204C where taken)”).
The motivation for combining the references has been discussed in claim 1 above.
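For illustrative purposes only, the box-search-by-feature-similarity procedure addressed in claim 9 above may be sketched as follows. The histogram feature, cosine similarity measure, step sizes, scales, and similarity requirement are assumptions of the sketch; the claim does not recite a particular feature or measure.

```python
import numpy as np

def box_feature(img, x, y, w, h):
    """Stand-in feature extraction: normalized intensity histogram of a box region."""
    patch = img[y:y + h, x:x + w]
    hist, _ = np.histogram(patch, bins=16, range=(0, 256))
    return hist / (hist.sum() + 1e-9)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

ir_image = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
H, W = ir_image.shape
tx, ty, tw, th = 150, 110, 30, 30                   # target region of the object of interest
reference = box_feature(ir_image, tx, ty, tw, th)   # first feature extraction result

best_box, best_sim = None, -1.0
for step in (4, 8, 16):                             # assumed step sizes
    for scale in (0.8, 1.0, 1.25):                  # assumed scales
        w, h = int(tw * scale), int(th * scale)
        for dx in (-step, 0, step):                 # search around the starting position
            for dy in (-step, 0, step):
                x, y = tx + dx, ty + dy
                if 0 <= x and 0 <= y and x + w <= W and y + h <= H:
                    sim = cosine_similarity(reference,
                                            box_feature(ir_image, x, y, w, h))
                    if sim > best_sim:
                        best_box, best_sim = (x, y, w, h), sim

SIMILARITY_REQUIREMENT = 0.9                        # assumed threshold
if best_sim >= SIMILARITY_REQUIREMENT:
    target_tracking_region = best_box               # basis for trajectory prediction
```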
Regarding claim 10, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared image processing method according to claim 9, wherein, after determining, based on the feature similarity between the second feature extraction result of each target box and the first feature extraction result, the region where the target box whose feature similarity meets a requirement is located as a target tracking region of the target object of interest, the method further comprises:
determining a reference step size and a reference direction for a next trajectory prediction of the target object of interest based on relative position information between the target tracking region and the target region (see Afrouzi, Column 169, lines 29-40: “FIG. 198B illustrates a subsequent moment wherein the processor decides a next polymorphic rectangular coverage area 11303. The dotted line 11304 indicates a suggested L-shape path back to a central point of a first polymorphic rectangular coverage area 11301 and then to a central point of the next polymorphic rectangular coverage area 11303. Because of the polymorphic nature of these path planning methods, the path may be overridden by a better path, illustrated by the solid line 11305. The path defined by the solid line 11305 may override the path defined by the dotted line 11304. The act of overriding may be a characteristic that may be defined in the realm of polymorphism”).
The motivation for combining the references has been discussed in claim 1 above.
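For illustrative purposes only, deriving a reference step size and reference direction from the relative position between the target tracking region and the target region, as addressed in claim 10 above, may be sketched as follows; the coordinate values are hypothetical.

```python
import math

prev_center = (165.0, 125.0)   # center of the prior target region
curr_center = (173.0, 119.0)   # center of the target tracking region just determined

# The relative position between the two regions yields the displacement per frame.
dx = curr_center[0] - prev_center[0]
dy = curr_center[1] - prev_center[1]
reference_step = math.hypot(dx, dy)        # reference step size (pixels per frame)
reference_direction = math.atan2(dy, dx)   # reference direction (radians)

print(f"step={reference_step:.1f} px, "
      f"direction={math.degrees(reference_direction):.1f} deg")
```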
Regarding claim 11, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared image processing method according to claim 9, wherein setting target boxes with different step sizes and scales, searching from the target region as a starting position, and respectively performing feature extraction on regions where the target boxes are located to obtain respective second feature extraction results includes:
setting different scales by different ratios based on the size of the target object of interest, setting different step sizes according to the category of the target object of interest following a linear motion pattern, and respectively setting the target boxes in all directions around the target region where the target object of interest is located with the different step sizes and scales (see Afrouzi, Column 192, lines 39-50: “the processor may need to account for scale changes, rotation, and/or affine invariance for image matching and object recognition. To account for such factors, the processor may design descriptors that are rotationally invariant or estimate a dominant orientation at each detected key point. In some embodiments, the processor may detect false negatives (failure to match) and false positives (incorrect match). Instead of finding all corresponding feature points and comparing all features against all other features in each pair of potentially matching images, which is quadratic in the number of extracted features, the processor may use indexes”); and
searching within a specified image region from the target region as the starting position, and performing feature extraction on the regions where the target boxes are located to obtain the respective second feature extraction results (see Afrouzi, Column 192, lines 46-56: “Instead of finding all corresponding feature points and comparing all features against all other features in each pair of potentially matching images, which is quadratic in the number of extracted features, the processor may use indexes. In some embodiments, the processor may use multi-dimensional search trees or a hash table, vocabulary trees, K-Dimensional tree, and best bin first to help speed up the search for features near a given feature. In some embodiments, after finding some possible feasible matches, the processor may use geometric alignment and may verify which matches are inliers and which ones are outliers”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 12, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared image processing method according to claim 1, wherein performing target detection on the infrared image to determine one or more target objects includes:
performing binarization processing on the infrared image to obtain a corresponding mask image (see Afrouzi, Column 107, lines 52-58: “a binary image includes either black or white regions. Pixels along the edge of a binary region (i.e., border) may be identified by morphological operations and difference images. Marking the pixels along the contour may have some useful applications, however, an ordered sequence of border pixel coordinates for describing the contour of a region may also be determined”);
performing image morphological filtering on the mask image to obtain a target object mask image corresponding to the infrared image (see Manivannan, paragraph [0080]: “the results may be submitted to Morphological operations (e.g., erosion, dilation, opening and closing) to further clean the segmentation before proceeding to the next method”);
fusing the target object mask image with the original infrared image to obtain a target screening image, and performing target detection on the target screening image to determine category and position information of the one or more target objects contained in the infrared image (see Afrouzi, Column 186, lines 50-67: “to recognize an object with high accuracy, feature amounts that characterize the recognition target object need to be configured in advance. Therefore, to prepare the object classification component of the data processing unit, different images of the desired objects are introduced to the data processing unit in a training set. After processing the images layer by layer, different characteristics and features of the objects in the training image set including edge characteristic combinations, basic shape characteristic combinations and the color characteristic combinations are determined by the deep learning algorithm(s) and the classifier component classifies the images by using those key feature combinations”).
The motivation for combining the references has been discussed in claim 1 above.
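For illustrative purposes only, the binarize-filter-fuse-detect sequence addressed in claim 12 above may be sketched as follows. The threshold value, kernel size, and the use of connected components as a stand-in for a trained detector are assumptions of the sketch.

```python
import numpy as np
import cv2

ir_image = np.random.randint(0, 256, (240, 320), dtype=np.uint8)

# Binarization: produce a mask image from the infrared image.
_, mask = cv2.threshold(ir_image, 180, 255, cv2.THRESH_BINARY)

# Image morphological filtering (opening) to obtain the target object mask image.
kernel = np.ones((5, 5), np.uint8)
target_mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Fusion: keep original pixel values only inside the mask, suppressing
# background so detection operates on a target screening image.
screening_image = cv2.bitwise_and(ir_image, ir_image, mask=target_mask)

# Stand-in for target detection: connected components provide position
# information; a trained detector would additionally supply categories.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(target_mask)
for i in range(1, num_labels):   # label 0 is the background
    x, y, w, h, area = stats[i]
    print(f"candidate {i}: bbox=({x}, {y}, {w}, {h}), area={area}")
```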
Regarding claim 13, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared image processing method according to claim 12, wherein performing target detection on the target screening image to determine category and position information of the one or more target objects contained in the infrared image includes:
performing target detection on the target screening image using a trained detection model to determine the category and location information of the one or more target objects contained in the infrared image (see Afrouzi, Column 194, lines 46-52: “a classifier y=ƒ*(x) may map an image array x to a category y (e.g., cat, human, refrigerator, or other objects), wherein x∈{set of images} and y∈{set of objects}. In some embodiments, the processor may determine a mapping function y=ƒ(x; θ), wherein θ may be the value of parameters that return a best approximation”);
wherein the detection model is obtained by training a neural network model with a training sample set, the training sample set including training sample images containing different target objects and their respective categories and position labels (see Manivannan, paragraph [0056]: “SVM, is a machine learning, linear model for classification and regression problems, and may be used to solve linear and non-linear problems. The idea of an SVM is to create a line or hyperplane that separates data into classes. More formally, an SVM defines one or more hyperplanes in a multi-dimensional space, where the hyperplanes are used for classification, regression, outlier detection, etc. Essentially, an SVM model is a representation of labeled training examples as points in multi-dimensional space, mapped so that the labeled training examples of different categories are divided by hyperplanes, which may be thought of as decision boundaries separating the different categories. When a new test input sample is submitted to the SVM model, the test input is mapped into the same space and a prediction is made regarding what category it belongs to based on which side of a decision boundary (hyperplane) the test input lies”).
The motivation for combining the references has been discussed in claim 1 above.
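For illustrative purposes only, the classifier mapping y=f(x; θ) cited from Afrouzi and the SVM classification cited from Manivannan may be sketched together as follows. The toy feature vectors, category labels, and scikit-learn tooling are assumptions of the sketch.

```python
import numpy as np
from sklearn.svm import SVC

# Toy training sample set: labeled feature vectors standing in for
# training sample images of different target objects and their categories.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 16)), rng.normal(3, 1, (20, 16))])
y_train = np.array([0] * 20 + [1] * 20)   # e.g., 0 = person, 1 = vehicle

# A linear SVM defines a hyperplane separating the two categories.
clf = SVC(kernel="linear").fit(X_train, y_train)

# A new feature vector (e.g., extracted from a target screening image)
# is mapped to a category based on which side of the hyperplane it lies.
x_test = rng.normal(3, 1, (1, 16))
print("predicted category:", clf.predict(x_test)[0])
```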
Regarding claim 14, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared image processing method according to claim 12, wherein performing binarization processing on the infrared image to obtain the corresponding mask image includes:
setting pixels in the infrared image that are above a temperature threshold to a first grayscale value, and setting pixels that are below the temperature threshold to a second grayscale value, to obtain the corresponding mask image (see Afrouzi, Column 183, lines 2-7: “the processor may reduce the grayscale image to a binary bitmap. In some embodiments, the processor may extract a binary image by performing some form of thresholding to convert the grayscale image into an upper side of a threshold or a lower side of the threshold”);
performing image processing on the mask image by first performing erosion followed by dilation to obtain the target object mask image corresponding to the infrared image (see Manivannan, paragraph [0065]: “Morphological operations (e.g., erosion, dilation, opening and closing) are applied to the output of Chan-Vese active contour to refine the contour boundaries and to remove small isolated regions”).
The motivation for combining the references has been discussed in claim 1 above.
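For illustrative purposes only, the temperature-threshold binarization and erosion-followed-by-dilation sequence addressed in claim 14 above may be sketched as follows. The intensity-as-temperature proxy, threshold value, grayscale values, and kernel are assumptions of the sketch.

```python
import numpy as np
import cv2

# Hypothetical 8-bit infrared frame in which pixel intensity serves as a
# proxy for temperature (hotter pixels are brighter).
ir_image = np.random.randint(0, 256, (240, 320), dtype=np.uint8)

TEMP_THRESHOLD = 180             # assumed grayscale level for the temperature threshold
FIRST_GRAY, SECOND_GRAY = 255, 0

# Pixels above the threshold take the first grayscale value; the rest take
# the second, yielding the corresponding mask image.
mask = np.where(ir_image > TEMP_THRESHOLD, FIRST_GRAY, SECOND_GRAY).astype(np.uint8)

# Erosion followed by dilation (a morphological opening) removes small
# isolated responses while restoring the extent of the surviving regions.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
target_object_mask = cv2.dilate(cv2.erode(mask, kernel), kernel)
```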
Claim 15 is rejected for the same reasons as discussed in claim 1 above. In addition, the combination teachings of Afrouzi and Manivannan as discussed above also disclose a processor, a memory coupled to the processor, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the claimed method (see Afrouzi, Abstract: “A robot for perceiving a spatial representation of an environment, including: an actuator, at least one sensor, a processor, and memory storing instructions that when executed by the processor…”).
Regarding claim 16, the combination teachings of Afrouzi and Manivannan as discussed above also disclose the infrared thermal imaging device according to claim 15, further comprising an infrared photographing module and a display module connected to the processor; wherein the infrared photographing module is configured to capture an infrared image and send it to the processor (see Afrouzi, Column 53, lines 33 – 63: “information sensed by a depth perceiving sensor may be processed and translated into depth measurements, which, in some embodiments, may be reported in a standardized measurement unit, such as millimeter or inches, for visualization purposes … a one or more IR (or with other portions of the spectrum) illuminators (such as those mounted on a robot) may project light onto objects (e.g., with a spatial structured pattern (like with structured light), or by scanning a point-source of light), and the resulting projection may be sensed with one or more cameras (such as robot-mounted cameras offset from the projector in a horizontal direction). In resulting images from the one or more cameras, the position of pixels with high intensity may be used to infer depth (e.g., based on parallax, based on distortion of a projected pattern, or both in captured images)”);
the display module is configured to display the infrared image output by the processor, with the target object of interest being highlighted in the infrared image (see Afrouzi, Column 81, lines 62–55: “the processor may manipulate the map by cleaning up the map for navigation purposes or aesthetics purposes (e.g., displaying the map to a user)”).
The motivation for combining the references has been discussed in claim 1 above.
Claim 17 is rejected for the same reasons as discussed in claim 3 above.
Claim 18 is rejected for the same reasons as discussed in claim 4 above.
Claim 19 is rejected for the same reasons as discussed in claim 5 above.
Claim 20 is rejected for the same reasons as discussed in claim 1 above. In addition, the combination teachings of Afrouzi and Manivannan as discussed above also disclose a computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program thereon, wherein the computer program, when executed by a processor, implements the infrared image processing method according to claim 1 (see Afrouzi, Column 333, line 62 – Column 334 line 3: “The methods and techniques described herein may be implemented as a process, as a method, in an apparatus, in a system, in a device, in a computer readable medium (e.g., a computer readable medium storing computer readable instructions or computer program code that may be executed by a processor to effectuate robotic operations), or in a computer program product including a computer usable medium with computer readable program code embedded therein”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NIENRU YANG whose telephone number is (571)272-4212. The examiner can normally be reached Monday-Friday 10AM-6PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THAI TRAN can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NIENRU YANG/Examiner, Art Unit 2484
/THAI Q TRAN/Supervisory Patent Examiner, Art Unit 2484