Prosecution Insights
Last updated: April 19, 2026
Application No. 17/953,041

OBJECT DETECTING METHOD AND SYSTEM USING DISTANCE IMAGE

Non-Final OA — §101, §103
Filed: Sep 26, 2022
Examiner: GOEBEL, EMMA ROSE
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Nuvoton Technology Corporation Japan
OA Round: 3 (Non-Final)
Grant Probability: 53% (Moderate)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 53% (24 granted / 45 resolved; -8.7% vs TC avg)
Interview Lift: +47.0% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 0m avg prosecution; 40 applications currently pending
Career History: 85 total applications across all art units

Statute-Specific Performance

§101: 18.2% (-21.8% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 45 resolved cases.
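
The headline figures above are simple arithmetic over the career counts. A minimal sketch, assuming the dashboard derives them as shown here: the TC-average baselines are back-solved from the displayed deltas, and capping the with-interview projection at 99% is an inference from the displayed values, not a documented formula.

```python
# Reproducing the dashboard's headline figures from the career counts above.
# Assumed formulas: allow rate = granted / resolved; "vs TC avg" = rate - baseline;
# with-interview projection = base rate + lift, capped at 99%.

granted, resolved = 24, 45
career_allow_rate = granted / resolved               # 0.533... -> "53%"
print(f"Career allow rate: {career_allow_rate:.0%}")

# Per-statute rates and their displayed deltas imply these TC baselines.
statute_rates = {"§101": 0.182, "§103": 0.601, "§102": 0.118, "§112": 0.084}
deltas = {"§101": -0.218, "§103": 0.201, "§102": -0.282, "§112": -0.316}
for statute, rate in statute_rates.items():
    tc_avg = rate - deltas[statute]                  # back-solved baseline
    print(f"{statute}: {rate:.1%} (TC avg ~ {tc_avg:.1%})")

interview_lift = 0.47                                # +47.0 points
with_interview = min(career_allow_rate + interview_lift, 0.99)
print(f"Projected with interview: {with_interview:.0%}")  # 99%
```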

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgement is made of Applicant's claim of priority from Foreign Application No. JP2020-064779, filed March 31, 2020, and continuation of International Application No. PCT/JP2021/013000, filed March 26, 2021.

Status of Claims

Claims 1-19 are pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on July 2, 2025 has been entered.

Response to Arguments

Applicant's arguments, see p. 6, filed July 2, 2025, with respect to the Claim Objections have been fully considered and are persuasive. The amendment of claim 1 has overcome the previous objection, which has therefore been withdrawn.

Applicant's arguments filed July 2, 2025 with respect to the 35 USC 101 rejection have been fully considered but they are not persuasive. Applicant argues that the abstract idea rejection is overcome because the step of "converting each of the series of distance images to a point cloud image having an X, Y, and Z orthogonal coordinate system" recites significantly more than the abstract idea. Examiner respectfully disagrees. As recited in MPEP 2106.05(g), insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional. In this case, Examiner asserts that the extra-solution activity of "obtaining a series of distance images of a monitoring region, each of the series of distance images being captured using an infrared light and comprising a matrix of data each representing a distance to targets" and "converting each of the series of distance images to a point cloud image having an X, Y, and Z orthogonal coordinate system" is insignificant because it is mere data gathering for use in the target detection process. Thus, the 35 USC 101 rejection of the claims is upheld.

Applicant's arguments filed July 2, 2025 with respect to the 35 USC 103 rejection have been fully considered but they are moot because of the new grounds of rejection presented in the 35 USC 103 section below. Applicant argues that none of the cited references disclose or suggest "tracking the target by comparing results of detection of the target between frames regarding the series of distance images". However, the newly presented Hiekata reference teaches determining a temporal change between a candidate pixel in a distance image of a current frame and a corresponding candidate pixel in a distance image of a past frame (see Paragraph [0032]). Therefore, the 35 USC 103 rejection of the claims is upheld.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claims recite a system, method, and non-transitory computer-readable medium for determining a stay decision of a target based on the positional change of the target. Consider method claim 1:

Step 1: The instant claim is directed to a method or a process; therefore, the claim is directed to one of the statutory categories of invention.

Step 2A, Prong One: The limitations "detecting a target based on the point cloud image", "tracking the target by comparing results of a detection of the target between frames regarding the series of distance images", and "making a stay decision, the stay decision including determining whether any stay of the target has occurred, based on an index indicating a positional change of the target with passage of time", as drafted, recite an abstract idea: a process that, under its broadest reasonable interpretation, covers performance of the limitations manually and in the mind of a person. That is, a user or person skilled in the art may examine a series of images and mentally determine whether a person in the images has stayed in the same spot based on their position in the images. This concept falls under the "mental processes" grouping of abstract ideas, i.e., a concept performed in the human mind (an evaluation, judgement, and/or opinion of the user).

Step 2A, Prong Two: The 2019 PEG defines the phrase "integration into a practical application" to require an additional step or a combination of additional steps in the claim to apply, rely on, or use the judicial exception. In the instant case, the steps of "obtaining a series of distance images of a monitoring region, each of the series of distance images being captured using an infrared light and comprising a matrix of data each representing a distance to targets" and "converting each of the series of distance images to a point cloud image having an X, Y, and Z orthogonal coordinate system" are considered to be extra-solution activity of gathering and outputting information. In addition, with respect to the computer-readable medium and system claims 18 and 19, the mere recitation of a generic processing system or storage medium to perform/store programming instructions of the recited/identified abstract idea does not integrate the identified abstract idea into a practical application. Accordingly, the above-mentioned additional elements/limitations do not integrate the abstract idea into a practical application; therefore, the independent claims recite an abstract idea.

Step 2B: Because the claims fail under Step 2A, the claims are further evaluated under Step 2B. The claims herein do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements/limitations that perform the recited steps amount to no more than insignificant extra-solution activity. Mere instructions to apply an exception using a generic component cannot provide an inventive concept. Therefore, independent claims 1, 18, and 19 are not patent eligible. In addition, claims 2-17 of the instant application provide limitations that, individually or in combination, do not integrate the identified abstract idea into a practical application or provide significantly more than the identified abstract idea.
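
To make the recited "stay decision" step concrete: a minimal sketch, assuming a per-frame centroid track for a single target, using maximum displacement over a sliding window as the positional-change index. The window length and distance threshold are illustrative assumptions; this is not the applicant's disclosed implementation.

```python
import math

def stay_decision(track, window=10, threshold_m=0.2):
    """Decide whether a tracked target has 'stayed', using the maximum
    displacement over the last `window` frames as the positional-change
    index. Purely illustrative; parameters are arbitrary assumptions."""
    if len(track) < window:
        return False
    recent = track[-window:]
    x0, y0 = recent[0]
    # Largest distance from the window's starting position.
    drift = max(math.hypot(x - x0, y - y0) for x, y in recent)
    return drift < threshold_m

# A target that only wanders slightly around one spot -> stay detected.
track = [(1.0 + 0.01 * i, 2.0) for i in range(12)]
print(stay_decision(track))  # True
```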
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Sugahara et al. (US 2019/0087976 A1) in view of Ryoma Oami (US 2020/0134323 A1) further in view of Takashi Hiekata (US 2019/0066270 A1).

Regarding claim 1, Sugahara teaches an information processing method comprising: obtaining a series of distance images of a monitoring region (Sugahara, Para. [0043]: the three-dimensional imaging device is a device which generates the distance images of the objects by taking images from multiple angles), each of the series of distance images being captured using an infrared light and comprising a matrix of data each representing a distance to targets (Sugahara, Para. [0060]: an imaging unit which is a camera which has a three-dimensional image sensor (distance image sensor) obtaining the depth information (distance image) including the distance between the sensor and the subject plus the shape of the subject; an example of the three-dimensional image sensor includes a ToF sensor, which radiates electromagnetic waves such as infrared rays or visible light to the subject); and converting each of the series of distance images to a point cloud image having an X, Y, and Z orthogonal coordinate system (Sugahara, Para. [0061]: one example of the format of the distance image is the 3D point cloud data, which is a set of points in three-dimensional orthogonal coordinates).
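
For context on the "converting ... to a point cloud image having an X, Y, and Z orthogonal coordinate system" limitation: a minimal back-projection sketch under a pinhole camera model. The intrinsics (fx, fy, cx, cy) are assumed values, and depth is treated as distance along the optical axis (a calibrated ToF pipeline may instead report radial distance). This illustrates the kind of conversion the claim recites; it is not drawn from Sugahara or the application.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a distance image (metres per pixel) into an X, Y, Z
    point cloud under a pinhole camera model. Intrinsics are assumed
    known; a real ToF pipeline would use calibrated values."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((4, 4), 2.0)           # toy 4x4 frame, everything 2 m away
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)                      # (16, 3)
```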
Although Sugahara teaches determining a series of distance images and converting them to a point cloud (Sugahara, Para. [0060]-[0061]), Sugahara does not explicitly teach "detecting a target based on the point cloud image" and "making a stay decision, the stay decision including determining whether any stay of the target has occurred, based on an index indicating a positional change of the target with passage of time". However, in an analogous field of endeavor, Oami teaches that the generation unit detects the object queue from the video frame and generates the tracking information indicating the position of each tracking target object using each object included in the target queue as the tracking object (Oami, Para. [0043]). Oami further teaches that the estimated position computation unit determines whether or not the tracking target object is included in the standstill region (i.e., determining whether any stay of the target has occurred) (Oami, Para. [0112]). The information processing apparatus estimates the behavior of the object queue using the tracking information at the first time point and generates queue behavior information, which indicates the behavior of the object queue. Furthermore, the information processing apparatus estimates the position of each tracking target object at the second time point based on the tracking information at the first time point and the queue behavior information at the first time point (queue behavior information generated for the object queue at the first time point). The information processing apparatus updates the tracking information based on the estimated position of each tracking target object at the second time point and the position of each object detected from the video frame at the second time point (the video frame representing the capturing result of the camera at the second time point) (Oami, Para. [0037]).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sugahara with the teachings of Oami by including detecting a target based on the point cloud image of Sugahara and determining whether any stay of the target has occurred (i.e., whether the target is in a standstill region) based on a positional change of the object from the first time point to the second time point. One having ordinary skill in the art before the effective filing date would have been motivated to combine these references because doing so would allow for accurately tracking an object in a queue of objects, as recognized by Oami.

Although Sugahara in view of Oami teaches estimating the position of each tracking target object at the second time point based on the tracking information at the first time point (Oami, Para. [0037]), they do not explicitly teach "tracking the target by comparing results of detection of the target between frames regarding the series of distance images". However, in an analogous field of endeavor, Hiekata teaches calculating a feature value indicating the characteristic of the temporal change between each candidate pixel in a distance image of a current frame and a corresponding candidate pixel in a distance image of a past frame (Hiekata, Para. [0032]). The same single disturbance object can be tracked until the disturbance object becomes out-of-frame by associating candidate pixels with one another over a plurality of frames (Hiekata, Para. [0054]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sugahara in view of Oami with the teachings of Hiekata by including tracking the target objects by comparing results of detection of the target (i.e., the candidate pixel) between a distance image of a current frame and a distance image of a past frame. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for recognizing a target object in a distance image, as recognized by Hiekata. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
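
The "tracking ... by comparing results of detection of the target between frames" limitation amounts to associating detections across consecutive frames. A minimal greedy nearest-neighbour sketch, assuming 2-D detection centroids per frame; Hiekata associates candidate pixels, but any gated association scheme has the same shape, and the gate distance here is an arbitrary assumption.

```python
import math

def associate(prev, curr, max_dist=0.5):
    """Greedy nearest-neighbour association between detections in the
    previous and current frames. Returns {prev_index: curr_index};
    unmatched detections simply get no entry."""
    matches, used = {}, set()
    for i, (px, py) in enumerate(prev):
        best, best_d = None, max_dist
        for j, (cx, cy) in enumerate(curr):
            if j in used:
                continue
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches

prev = [(0.0, 0.0), (3.0, 1.0)]
curr = [(0.1, 0.0), (3.2, 1.1)]
print(associate(prev, curr))  # {0: 0, 1: 1}
```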
Regarding claim 2, Sugahara in view of Oami further in view of Hiekata teaches the information processing method of claim 1, and further teaches wherein the index includes a velocity of the target (Oami, Para. [0035]: the behavior of the object is represented by the state, the motion, and the like of the object; for example, the state of the object is a state where the object stands still or a state where the object is moving, and the motion of the object is represented by a direction and a speed at which the object is moving). The proposed combination as well as the motivation for combining the Sugahara, Oami and Hiekata references presented in the rejection of Claim 1 apply to Claim 2 and are incorporated herein by reference. Thus, the method recited in Claim 2 is met by Sugahara in view of Oami further in view of Hiekata.

Regarding claim 3, Sugahara in view of Oami further in view of Hiekata teaches the information processing method of claim 1, and further teaches wherein identification information is assigned to the target (Oami, Para. [0079]: the table shows a tracking ID, a position, a state, a motion, a feature value, and a region; the tracking ID is an identifier assigned to the tracking object). The proposed combination as well as the motivation for combining the Sugahara, Oami and Hiekata references presented in the rejection of Claim 1 apply to Claim 3 and are incorporated herein by reference. Thus, the method recited in Claim 3 is met by Sugahara in view of Oami further in view of Hiekata.

Regarding claim 4, Sugahara in view of Oami further in view of Hiekata teaches the information processing method of claim 3, and further teaches wherein the stay decision is made with respect to the target to which the identification information is assigned (Oami, Para. [0079]: the table shows a tracking ID, a position, a state, a motion, a feature value, and a region; the tracking ID is an identifier assigned to the tracking object. Para. [0088]: the information determining the movement region indicates the position of the movement region, information (an identifier or the like assigned to the object) determining each object included in the movement region, or the like). The proposed combination as well as the motivation for combining the Sugahara, Oami and Hiekata references presented in the rejection of Claim 1 apply to Claim 4 and are incorporated herein by reference. Thus, the method recited in Claim 4 is met by Sugahara in view of Oami further in view of Hiekata.

Regarding claim 16, Sugahara in view of Oami further in view of Hiekata teaches the information processing method of claim 1, and teaches the method further comprising a correlation decision step including determining a correlation between a plurality of the targets (Oami, Para. [0087]: in the partial movement state, a part of the objects moves in the traveling direction of the object queue and the other objects stand still; thus, the region of the object queue is divided into a movement region and a standstill region, i.e., moving targets are correlated and standstill objects are correlated). The proposed combination as well as the motivation for combining the Sugahara, Oami and Hiekata references presented in the rejection of Claim 1 apply to Claim 16 and are incorporated herein by reference. Thus, the method recited in Claim 16 is met by Sugahara in view of Oami further in view of Hiekata.
Claim 18 recites a computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 1. Therefore, the recited programming instructions of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Sugahara, Oami and Hiekata references, presented in the rejection of Claim 1, apply to this claim. Finally, the Sugahara, Oami and Hiekata references disclose a computer readable storage medium (Sugahara, claim 15).

Claim 19 recites an information processing system with elements corresponding to the steps recited in Claim 1. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Sugahara, Oami and Hiekata references, presented in the rejection of Claim 1, apply to this claim. Finally, the Sugahara, Oami and Hiekata references disclose a light-emitting device for emitting an infrared light (Sugahara, Para. [0060]: ToF sensors radiate electromagnetic waves such as infrared rays or visible light to the subject), an image sensor (Sugahara, Para. [0060]: ToF sensor), and a first and second processor (Sugahara, Para. [0138]: the components could be implemented with processing circuits; processors such as the CPU are examples of the processing circuit).

Claims 5-6, 8-11, and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Sugahara et al. (US 2019/0087976 A1) in view of Ryoma Oami (US 2020/0134323 A1) further in view of Takashi Hiekata (US 2019/0066270 A1), as applied to claims 1-4, 16, and 18-19 above, and further in view of Kazuhiko Iwai (US 2015/0010204 A1).

Regarding claim 5, Sugahara in view of Oami further in view of Hiekata teaches the information processing method of claim 1, as described above. Although Sugahara in view of Oami further in view of Hiekata teaches determining if a target object is in a standstill region (Oami, Para. [0112]), they do not explicitly teach "the stay decision includes a decision about a degree of the stay of the target". However, in an analogous field of endeavor, Iwai teaches that the staying time indicating the time for which persons stayed around each display shelf is generated by the person behavior analysis unit and displayed; a user can know the state of occurrence of item pick-up actions performed by customers at each display shelf and the state of staying of customers around each display shelf (Iwai, Para. [0074]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sugahara in view of Oami further in view of Hiekata with the teachings of Iwai by including a staying time indicating the time for which persons stayed that describes the state of staying of customers. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for accurate analysis of a person's behavior based on staying time, as recognized by Iwai. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
Regarding claim 6, Sugahara in view of Oami further in view of Hiekata teaches the information processing method of claim 1, as described above. Although Sugahara in view of Oami further in view of Hiekata teaches determining if a target object is in a standstill region (Oami, Para. [0112]), they do not explicitly teach "confirming, based on a result of the stay decision, that the stay of the target has occurred to generate stay occurrence information about a stay occurrence period including a time of occurrence of the stay of the target, wherein the stay occurrence period includes at least a period preceding the time of occurrence of the stay of the target". However, in an analogous field of endeavor, Iwai teaches that a staying time in the vicinity area is measured, and a person whose staying time does not reach a predetermined threshold value can be determined to be a person who could not perform an item pick-up action; persons to be included in the determination performed by the item pick-up action determination unit can thereby be narrowed down, and the staying time measurement can be based on the entry time into the vicinity area and the exit time from the vicinity area (Iwai, Para. [0114]). Iwai further teaches that the area determination unit performs a process of determining whether each person detected by the person detection unit entered the surrounding area as well as whether the person entered the vicinity area based on the position of the head center of the person (i.e., there is a period of time preceding the person staying in the vicinity area) (Iwai, Para. [0103]). The proposed combination as well as the motivation for combining the Sugahara, Oami, Hiekata and Iwai references presented in the rejection of Claim 5 apply to Claim 6 and are incorporated herein by reference. Thus, the method recited in Claim 6 is met by Sugahara in view of Oami further in view of Hiekata and Iwai.

Regarding claim 8, Sugahara in view of Oami further in view of Hiekata teaches the information processing method of claim 1, as described above. Although Sugahara in view of Oami further in view of Hiekata teaches determining if a target object is in a standstill region (Oami, Para. [0112]), they do not explicitly teach "confirming, based on a result of the stay decision, that the stay of the target has occurred to present stay occurrence information about the occurrence of the stay of the target". However, in an analogous field of endeavor, Iwai teaches that a staying time in the vicinity area is measured, and a person whose staying time does not reach a predetermined threshold value can be determined to be a person who could not perform an item pick-up action; persons to be included in the determination performed by the item pick-up action determination unit can thereby be narrowed down, and the staying time measurement can be based on the entry time into the vicinity area and the exit time from the vicinity area (Iwai, Para. [0114]). The proposed combination as well as the motivation for combining the Sugahara, Oami, Hiekata and Iwai references presented in the rejection of Claim 5 apply to Claim 8 and are incorporated herein by reference. Thus, the method recited in Claim 8 is met by Sugahara in view of Oami further in view of Hiekata and Iwai.
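
The entry/exit-based staying-time measurement Iwai's Para. [0114] describes reduces to a timestamp subtraction plus a threshold test. A minimal sketch with illustrative timestamps and an assumed 30-second threshold:

```python
from datetime import datetime, timedelta

def staying_time(entry_time, exit_time):
    """Dwell measured from area entry to area exit, in the style of the
    entry/exit-based measurement Iwai describes (Para. [0114])."""
    return exit_time - entry_time

dwell = staying_time(datetime(2025, 7, 2, 10, 0, 0),
                     datetime(2025, 7, 2, 10, 0, 45))
threshold = timedelta(seconds=30)
# Persons whose dwell falls below the threshold are excluded from the
# item pick-up determination; the dwell itself gives a "degree" of stay.
print(dwell, dwell >= threshold)   # 0:00:45 True
```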
Regarding claim 9, Sugahara in view of Oami further in view of Hiekata and Iwai teaches the information processing method of claim 8, and further teaches wherein the stay occurrence information includes an image concerning the stay of the target (Iwai, Para. [0076]: the analysis result screen includes a display image in which the result of the totaling for each display shelf is shown superimposed on a corresponding one of the images; the result of totaling displayed in this analysis result screen indicates the surrounding area staying time and the number of item pick-up actions detected for each display shelf). The proposed combination as well as the motivation for combining the Sugahara, Oami, Hiekata and Iwai references presented in the rejection of Claim 5 apply to Claim 9 and are incorporated herein by reference. Thus, the method recited in Claim 9 is met by Sugahara in view of Oami further in view of Hiekata and Iwai.

Regarding claim 10, Sugahara in view of Oami further in view of Hiekata and Iwai teaches the information processing method of claim 6, and further teaches wherein the stay decision includes determining whether the stay of the target has ended (Iwai, Para. [0114]: the staying time measurement can be based on the entry time into the vicinity area and the exit time from the vicinity area). The proposed combination as well as the motivation for combining the Sugahara, Oami, Hiekata and Iwai references presented in the rejection of Claim 5 apply to Claim 10 and are incorporated herein by reference. Thus, the method recited in Claim 10 is met by Sugahara in view of Oami further in view of Hiekata and Iwai.

Regarding claim 11, Sugahara in view of Oami further in view of Hiekata and Iwai teaches the information processing method of claim 8, and teaches the method further comprising confirming, based on a result of the stay decision, that the stay of the target has ended to generate end-of-stay information about an end-of-stay period including an ending time of the stay of the target, wherein the end-of-stay period includes at least a period preceding the ending time of the stay of the target (Iwai, Para. [0107]: the surrounding area staying time and the vicinity area staying time are calculated from the entry times into the surrounding area and the vicinity area and the exit times from the surrounding area and the vicinity area; the area entry determination unit 46 detects entry of a person into the surrounding area and the vicinity area as well as leaving of the person from the surrounding area and the vicinity area; it is to be noted that the person information storage unit 44 stores the position of the head center of each person detected in each frame (image) in association with the time of detection, and thus, based on the times of detection stored in the person information storage unit 44, it is possible to obtain the times when a person's entry into and leaving from the above areas were detected by the area entry determination unit 46 (entry time and exit time)). The proposed combination as well as the motivation for combining the Sugahara, Oami, Hiekata and Iwai references presented in the rejection of Claim 5 apply to Claim 11 and are incorporated herein by reference. Thus, the method recited in Claim 11 is met by Sugahara in view of Oami further in view of Hiekata and Iwai.
Regarding claim 13, Sugahara in view of Oami further in view of Hiekata and Iwai teaches the information processing method of claim 10, and teaches the method further comprising confirming, based on a result of the stay decision, that the stay of the target has ended to present end-of-stay information about the end of the stay of the target (Iwai, Para. [0107]: the surrounding area staying time and the vicinity area staying time are calculated from the entry times into the surrounding area and the vicinity area and the exit times from the surrounding area and the vicinity area; the area entry determination unit 46 detects entry of a person into the surrounding area and the vicinity area as well as leaving of the person from the surrounding area and the vicinity area; it is to be noted that the person information storage unit 44 stores the position of the head center of each person detected in each frame (image) in association with the time of detection, and thus, based on the times of detection stored in the person information storage unit 44, it is possible to obtain the times when a person's entry into and leaving from the above areas were detected by the area entry determination unit 46 (entry time and exit time)). The proposed combination as well as the motivation for combining the Sugahara, Oami, Hiekata and Iwai references presented in the rejection of Claim 5 apply to Claim 13 and are incorporated herein by reference. Thus, the method recited in Claim 13 is met by Sugahara in view of Oami further in view of Hiekata and Iwai.

Regarding claim 14, Sugahara in view of Oami further in view of Hiekata and Iwai teaches the information processing method of claim 13, wherein the end-of-stay information includes information in a different mode from the stay occurrence information (Iwai, Para. [0082]: the person detection unit performs a process of detecting a person from image information (a moving picture constituted of multiple frames (captured images)) obtained by capturing images covering an area around display shelves S. Para. [0076]: the analysis result screen includes a display image in which the result of the totaling for each display shelf is shown superimposed on a corresponding one of the images; the result of totaling displayed in this analysis result screen indicates the surrounding area staying time and the number of item pick-up actions detected for each display shelf). The proposed combination as well as the motivation for combining the Sugahara, Oami, Hiekata and Iwai references presented in the rejection of Claim 5 apply to Claim 14 and are incorporated herein by reference. Thus, the method recited in Claim 14 is met by Sugahara in view of Oami further in view of Hiekata and Iwai.

Regarding claim 15, Sugahara in view of Oami further in view of Hiekata teaches the information processing method of claim 1, as described above. Although Sugahara in view of Oami further in view of Hiekata teaches determining if a target object is in a standstill region (Oami, Para. [0112]), they do not explicitly teach "making a decision about entry of the target into an area of interest within the monitoring region". However, in an analogous field of endeavor, Iwai teaches that the surrounding area staying time and the vicinity area staying time are calculated from the entry times into the surrounding area and the vicinity area and the exit times from the surrounding area and the vicinity area, and the area entry determination unit 46 detects entry of a person into the surrounding area and the vicinity area as well as leaving of the person from the surrounding area and the vicinity area (Iwai, Para. [0107]). The proposed combination as well as the motivation for combining the Sugahara, Oami, Hiekata and Iwai references presented in the rejection of Claim 5 apply to Claim 15 and are incorporated herein by reference. Thus, the method recited in Claim 15 is met by Sugahara in view of Oami further in view of Hiekata and Iwai.
Claims 7 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Sugahara et al. (US 2019/0087976 A1) in view of Ryoma Oami (US 2020/0134323 A1) further in view of Takashi Hiekata (US 2019/0066270 A1) and Iwai (US 2015/0010204 A1), as applied to claims 5-6, 8-11, and 13-15 above, and further in view of Itoh (US 2020/0092454 A1).

Regarding claim 7, Sugahara in view of Oami further in view of Hiekata and Iwai teaches the information processing method of claim 6, and further teaches wherein the stay occurrence information includes a stay occurrence moving picture (Iwai, Para. [0082]: the person detection unit performs a process of detecting a person from image information (a moving picture constituted of multiple frames (captured images)) obtained by capturing images covering an area around display shelves S). The proposed combination as well as the motivation for combining the Sugahara, Oami, Hiekata and Iwai references presented in the rejection of Claim 5 apply to Claim 7 and are incorporated herein by reference. Although Sugahara in view of Oami further in view of Hiekata and Iwai teaches a staying time measurement based on the entry time into the vicinity area and the exit time from the vicinity area (Iwai, Para. [0114]) and a moving picture of multiple frames capturing the stay occurrence (Iwai, Para. [0082]), they do not explicitly teach "the stay occurrence moving picture is a moving picture constituted of a group of luminance images, included in the stay occurrence period, out of a group of time-series luminance images of the monitoring region". However, in an analogous field of endeavor, Itoh teaches a moving object map obtaining unit that obtains pieces of image data captured in time series from the image capturing unit and averages the changes in luminance between the pieces of captured image data in units of a plurality of pixels to thereby generate motion information on objects on a pixel-by-pixel basis (Itoh, Para. [0033]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sugahara in view of Oami further in view of Hiekata and Iwai with the teachings of Itoh by including a moving object map obtaining unit that obtains image data based on luminance values in images captured in time series. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for generation of an image of a scene with a wide dynamic range, as recognized by Itoh. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.

Regarding claim 12, Sugahara in view of Oami further in view of Hiekata and Iwai teaches the information processing method of claim 11, as described above. Although Sugahara in view of Oami further in view of Hiekata and Iwai teaches detecting the exit time of a person from a vicinity area (Iwai, Para. [0107]), they do not explicitly teach "the end-of-stay information includes an end-of-stay moving picture, and the end-of-stay moving picture is a moving picture constituted of a group of luminance images, included in the end-of-stay period, out of a group of time-series luminance images of the monitoring region".
However, in an analogous field of endeavor, Itoh teaches a moving object map obtaining unit that obtains pieces of image data captured in time series from the image capturing unit and averages the changes in luminance between the pieces of captured image data in units of a plurality of pixels to thereby generate motion information on objects on a pixel-by-pixel basis (Itoh, Para. [0033]). The proposed combination as well as the motivation for combining the Sugahara, Oami, Hiekata, Iwai, and Itoh references presented in the rejection of Claim 7 apply to Claim 12 and are incorporated herein by reference. Thus, the method recited in Claim 12 is met by Sugahara in view of Oami further in view of Hiekata, Iwai and Itoh.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Sugahara et al. (US 2019/0087976 A1) in view of Ryoma Oami (US 2020/0134323 A1) further in view of Takashi Hiekata (US 2019/0066270 A1) and Iwai (US 2015/0010204 A1), as applied to claims 1-4, 16, and 18-19 above, and further in view of Abe et al. (US 2019/0277947 A1).

Regarding claim 17, Sugahara in view of Oami further in view of Hiekata teaches the information processing method of claim 1, as described above. Although Sugahara in view of Oami further in view of Hiekata teaches determining if a target object is in a standstill region (Oami, Para. [0112]), they do not explicitly teach "estimating a posture of the target based on a center of gravity height of the target". However, in an analogous field of endeavor, Abe teaches a posture determiner that assesses, with reference to the center of gravity thus derived of the cluster of point groups in the area around the head, whether the bather has had his/her head further down or further bent after sitting down (Abe, Para. [0079]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Sugahara in view of Oami further in view of Hiekata with the teachings of Abe by including determining a posture based on the derived center of gravity. One having ordinary skill in the art would have been motivated to combine these references because doing so would provide an efficient tracking apparatus and method, as recognized by Abe. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
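
Claim 17's "estimating a posture ... based on a center of gravity height of the target" can be pictured as a height-banding rule over the target's centroid. A minimal sketch with illustrative thresholds; Abe's determiner works from the centroid of the point group around the head, and nothing here is the applicant's implementation.

```python
def classify_posture(centroid_height_m, standing_min=1.2, sitting_min=0.6):
    """Coarse posture estimate from the height of a target's centre of
    gravity above the floor. Thresholds are illustrative assumptions."""
    if centroid_height_m >= standing_min:
        return "standing"
    if centroid_height_m >= sitting_min:
        return "sitting"
    return "lying"

for h in (1.5, 0.9, 0.3):
    print(h, classify_posture(h))   # standing, sitting, lying
```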
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Emma Rose Goebel, whose telephone number is (703) 756-5582. The examiner can normally be reached Monday - Friday, 7:30-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Emma Rose Goebel/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662

Prosecution Timeline

Sep 26, 2022
Application Filed
Dec 17, 2024
Non-Final Rejection — §101, §103
Mar 13, 2025
Response Filed
Apr 09, 2025
Final Rejection — §101, §103
Jul 02, 2025
Request for Continued Examination
Jul 07, 2025
Response after Non-Final Action
Oct 06, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597236
FINE-TUNING JOINT TEXT-IMAGE ENCODERS USING REPROGRAMMING
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597129
METHOD FOR ANALYZING IMMUNOHISTOCHEMISTRY IMAGES
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597093
UNDERWATER IMAGE ENHANCEMENT METHOD AND IMAGE PROCESSING SYSTEM USING THE SAME
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597124
DEBRIS DETERMINATION METHOD
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12588885
FAT MASS DERIVATION DEVICE, FAT MASS DERIVATION METHOD, AND FAT MASS DERIVATION PROGRAM
Granted Mar 31, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 53%
With Interview: 99% (+47.0%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 45 resolved cases by this examiner. Grant probability derived from career allow rate.
