DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The claim interpretation with regard to claim 1 is withdrawn.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f): (FP 7.30.03)
(f) ELEMENT IN CLAIM FOR A COMBINATION-An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as "configured to" or "so that"; and
the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. (FP 7.30.05)
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: a sensing unit, a control unit, a sensor output unit, a test signal reception unit, a test signal response unit, a learning mode transition input unit, a test signal determination unit, a test signal estimation unit, and a test signal verification unit in claim 9.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. (FP 7.30.06)
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim 9 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wu et al. (Wu), US 2022/0026530.
In regard to claim 9, Wu discloses an object detection sensor configured to detect that a target object acts on to-be-detected rays between a plurality of elements or to detect to-be-detected rays from a target object to output a predetermined detection signal, the object detection sensor configured to output the predetermined detection signal toward an external object detection device existing outside the object detection sensor to detect the target object, the object detection sensor comprising: (Fig. 1, [0055]-[0082], [0137], [0147], [0159]-[0162], [0241]-[0246]: the remote sensor device detects user events and receives a radar signal of a person, and outputs the raw signal to the gait cube extraction module of the human recognition system, which is impacted by the person walking in a venue and recognizes the identity of the person; the remote sensor device is external to 120/130)
a sensing unit configured to detect that the target object acts on to-be-detected rays between a plurality of elements or to detect to-be-detected rays from the target object to output a sensing signal, (Fig. 1, [0055]-[0082], [0137], [0147], [0159]-[0162], [0241]-[0246]: the remote sensor device detects user events and receives a radar signal of a person, and outputs the raw signal to the gait cube extraction module of the human recognition system, which is impacted by the person walking in a venue)
a control unit configured to command output of the predetermined detection signal in response to input of the sensing signal, (Fig. 1, [0055]-[0082], [0137], [0147], [0159]-[0162], [0241]-[0246]: the processor of the type1/type2 device commands output of the raw signal in response to receiving radar signal 110)
a sensor output unit configured to output the predetermined detection signal in response to the command, (Fig. 1, [0055]-[0082], [0137], [0147], [0159]-[0162], [0241]-[0246]: the sensor outputs the raw signal in response to the command)
a test signal reception unit configured to receive a test signal from the external object detection device, (Figs. 1, 2, [0055]-[0082], [0105]-[0129], [0159]-[0162], [0187]-[0189], [0233], [0241]-[0246]: the processor receives a test (training) signal from the human recognition system 100 based on the communication protocol) and
a test signal response unit configured to output a test response signal to the external object detection device in response to reception of the test signal, the object detection sensor further comprising: (Figs. 1, 2, [0055]-[0082], [0105]-[0129], [0159]-[0162], [0187]-[0189], [0233], [0241]-[0246]: the processor responds with a response or an ACK to the first handshake signal based on the communication protocol)
a learning mode transition input unit configured to start a learning mode, ([0168], [0186]-[0187]: the training and testing operation can be started based on a threshold, condition, event, etc., according to certain criteria)
a test signal determination unit configured to determine a state of the test signal received by the test signal reception unit in the learning mode, ([0096], [0110]-[0133], [0136], [0146], [0148], [0175], [0276]-[0277]: determining a state of the test signal, such as a steady state, characteristics/states, etc. Note: please further define; there are many possibilities.)
a test signal estimation unit configured to estimate a characteristic of the test signal from the determined state of the test signal in the learning mode, ([0096], [0110]-[0133], [0136], [0146], [0148], [0157], [0197]-[0202]: estimating a characteristic (statistical distribution, etc.) of the signal from the determined state in the learning mode) and
a test signal verification unit configured to verify whether the estimated characteristic of the test signal is included in the test signal received by the test signal reception unit in the learning mode. ([0096], [0110]-[0133], [0136], [0146], [0148], [0157], [0173]-[0178], [0197]-[0202], [0244], [0306]-[0337]: verifying the validity of the estimate of the characteristic (such as a speed, or the speed based on the trace, etc.) from the signal received in the learning mode)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (Wu), US 2022/0026530, in view of Dahlgren et al. (Dahlgren), US 2022/0294971.
In regard to claim 1, Wu discloses an object detection sensor configured to detect that a target object acts on to-be-detected rays between a plurality of elements or to detect to-be-detected rays from a target object to output a predetermined detection signal, the object detection sensor configured to output the predetermined detection signal toward an external object detection device existing outside the object detection sensor to detect the target object, the object detection sensor comprising: (Fig. 1, [0055]-[0082], [0137], [0147], [0159]-[0162], [0241]-[0246]: the remote sensor device detects user events and receives a radar signal of a person, and outputs the raw signal to the gait cube extraction module of the human recognition system, which is impacted by the person walking in a venue and recognizes the identity of the person; the remote sensor device is external to 120/130)
a sensor configured to detect that the target object acts on to-be-detected rays between a plurality of elements or to detect to-be-detected rays from the target object to output a sensing signal, and (Fig. 1, [0055]-[0082], [0137], [0147], [0159]-[0162], [0241]-[0246]: the remote sensor device detects user events and receives a radar signal of a person, and outputs the raw signal to the gait cube extraction module of the human recognition system, which is impacted by the person walking in a venue)
a processor, the processor configured to: (Fig. 1, [0055]-[0082], [0137], [0147], [0159]-[0162], [0241]-[0246]: the processor of the type1/type2 device)
output the predetermined detection signal in response to input of the sensing signal, (Fig. 1, [0055]-[0082], [0137], [0147], [0159]-[0162], [0241]-[0246]: the processor of the type1/type2 device commands output of the raw signal in response to receiving radar signal 110)
and the processor further configured to:
start a learning mode, ([0168], [0186]-[0187]: the training or learning can be started based on a threshold applied to events, conditions, states, etc., according to certain criteria)
determine a state of the test signal in the learning mode, ([0096], [0110]-[0133], [0136], [0146], [0148], [0175], [0276]-[0277]: determining a state of the test signal, such as a steady state, characteristics/states, etc. Note: please further define; there are many possibilities.)
estimate a characteristic of the test signal from the determined state of the test signal in the learning mode, ([0096], [0110]-[0133], [0136], [0146], [0148], [0157], [0197]-[0202], [0244]: estimating a characteristic (statistical distribution, etc.) of the signal from the determined state in the learning mode) and
verify whether the estimated characteristic of the test signal is included in the test signal received in the learning mode. ([0096], [0110]-[0133], [0136], [0146], [0148], [0157], [0173]-[0178], [0197]-[0202], [0244], [0306]-[0337]: verifying the validity of the estimate of the characteristic (such as a speed, or the speed based on the trace, etc.) from the signal received in the learning mode)
However, Wu fails to explicitly disclose “receive a test signal from the external object detection device toward the object detection sensor, and output a test response signal to the external object detection device in response to reception of the test signal.”
Dahlgren discloses receiving a test signal from the external object detection device toward the object detection sensor, and outputting a test response signal to the external object detection device in response to reception of the test signal. (Fig. 2, [0036]-[0038], [0045]-[0049]: the remote device 210 may require the near sensor object detector 204 to examine a specific part of the video, etc., to detect moving objects, and the near sensor object detector 204 updates the remote device of the detected moving objects in response to the request. Note: please further define the test signal to help move the prosecution forward.)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Dahlgren’s collaborative object detection into Wu’s invention, as they are related to the same field of endeavor of human recognition. The motivation to combine these arts, as proposed above, is at least that Dahlgren’s collaborative object detection would help to provide an object detection request in Wu’s system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing an object detection request would help to improve the user experience of the device.
In regard to claim 2, Wu and Dahlgren disclose the object detection sensor as claimed in claim 1.
Wu further discloses wherein the predetermined detection signal and the test response signal are outputted through a same signal line. (Fig. 1, [0215]-[0217]: the sensor device may be a transceiver)
In regard to claim 3, Wu and Dahlgren disclose the object detection sensor as claimed in claim 1.
Wu further discloses wherein the processor is configured to execute, instead of the learning mode, a manual setting mode in which a characteristic of the test signal is manually set. ([0101], [0110]-[0136], [0139]-[0142], [0187]: the characteristic change can be automatic or manual)
In regard to claim 4, Wu and Dahlgren disclose the object detection sensor as claimed in claim 1.
Wu further discloses at least one of a switch installed in the object detection sensor, ([0071], [0090], [0163]: a switch in the sensor) a wireless reception unit configured to receive a radio signal from a predetermined wireless device, ([0066]-[0075]: receiving a radio signal from the wireless device) a network reception unit configured to receive a transmission signal from a predetermined network device, and the sensor configured to detect a predetermined motion of a human body. ([0055]-[0082], [0137], [0147], [0159]-[0162], [0241]-[0246]: a network to receive the transmission signal from a network device and a sensor to detect the motion of the human body)
In regard to claim 5, Wu and Dahlgren disclose the object detection sensor as claimed in claim 1.
Wu further discloses wherein the processor is configured to, when the processor fails to receive the test signal, or when the processor verifies or determines that the estimated characteristic of the test signal is not included in the received test signal, determine an abnormal state and provide an output indicating abnormality. ([0187]-[0197]: an error condition can be identified based on the signal quality condition, object, object characteristics, object movement/location, etc., of the received signal)
In regard to claim 6, Wu and Dahlgren disclose the object detection sensor as claimed in claim 1.
Wu further discloses wherein the test signal is a pulse wave signal. ([0071], [0219], [0234]-[0247]: pulse wave signal)
In regard to claim 7, Wu and Dahlgren disclose the object detection sensor as claimed in claim 1.
Wu further discloses wherein the estimated characteristic of the test signal is stored in a memory in the object detection sensor. ([0096], [0110]-[0133], [0136]-[0138], [0146], [0148], [0157], [0197]-[0202]: the estimated characteristic (statistical distribution, etc.) is stored in the memory of the sensor device)
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (Wu), US 2022/0026530, and Dahlgren et al. (Dahlgren), US 2022/0294971, as applied to claim 1, and further in view of SÖDERQVIST, US 2021/0180384.
In regard to claim 8, Wu and Dahlgren disclose an automatic door system comprising: the object detection sensor as claimed in claim 1, (Fig. 1, [0055]-[0082], [0137], [0147], [0159]-[0162], [0241]-[0246]: the remote sensor device detects user events and receives a radar signal of a person, and outputs the raw signal to the gait cube extraction module of the human recognition system; access control of the automatic door)
an automatic door control device, which is the external object detection device, configured to detect a human body, which is the target object, in response to input of the predetermined detection signal from the object detection sensor. (Fig. 1, [0042], [0055]-[0082], [0137], [0147], [0159]-[0162], [0241]-[0246]: the remote sensor device detects user events and receives a radar signal of a person to detect the person; the remote sensor device is external to 120/130 for access control of the automatic door)
However, Wu and Dahlgren fail to explicitly disclose “and an automatic door including a door leaf configured to open and close automatically and a door engine configured to cause the door leaf to open and close, wherein the automatic door control device controls the door engine on the basis of a result of detection of the human body.”
SÖDERQVIST discloses an automatic door including a door leaf configured to open and close automatically and a door engine configured to cause the door leaf to open and close, wherein the automatic door control device controls the door engine on the basis of a result of detection of the human body. ([0013]-[0020], [0045]-[0057]: the door includes a door leaf that opens and closes automatically and a control to cause the door leaf to open and close based on the detection result of the human body)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate SÖDERQVIST’s automatic door control system into Dahlgren and Wu’s invention, as they are related to the same field of endeavor of human recognition. The motivation to combine these arts, as proposed above, is at least that SÖDERQVIST’s automatic door control system would help to provide more application scenarios in Dahlgren and Wu’s system. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention that providing more application scenarios would help to improve the user experience of the device.
Response to Arguments
Applicant’s arguments with respect to claims 1-8 filed on 12/11/2025 have been considered but are moot because the arguments do not apply to the current rejection.
With respect to newly added claim 9, which is identical to original claim 1, the applicant’s arguments filed on 12/11/2025 have been fully considered but are not persuasive.
With respect to claim 9, the applicant argues that Wu fails to disclose “start a learning mode, determine a state of the test signal in the learning mode, estimate a characteristic of the test signal from the state determined of the test signal in the learning mode, verify whether the characteristic estimated of the test signal is included in the test signal received in the learning mode.” The examiner respectfully disagrees. [0168] of Wu discloses “an operation may be applied to data,” and the learning can be the operation. [0186]-[0187] of Wu further disclose “Any threshold may be pre-determined, adaptively (and/or dynamically) determined and/or determined by a finite state machine. The adaptive determination may be based on time, space, location, antenna, path, link, state, battery life, remaining battery life, available power, available computational resources, available network bandwidth, etc. A threshold to be applied to a test statistics to differentiate two events (or two conditions, or two situations, or two states), A and B, may be determined. Data (e.g. CI, channel state information (CSI), power parameter) may be collected under A and/or under B in a training situation.” Thus, the training or learning can be started based on a threshold applied to events, conditions, states, etc., according to certain criteria. [0130] of Wu, etc., discloses “The current event may also be associated with the “unknown event” if none of the events achieve an overall mismatch cost lower than a second threshold T2. The current event may be associated with at least one of: the known event, the unknown event and/or the another event, based on the mismatch cost and additional mismatch cost associated with at least one additional section of the first TSCI and at least one additional section of the second TSCI.
The known events may comprise at least one of: a door closed event, door open event, window closed event, window open event, multi-state event, on-state event, off-state event, intermediate state event, continuous state event, discrete state event, human-present event, human-absent event, sign-of-life-present event, and/or a sign-of-life-absent event.” Thus, the states of the data can be identified. [0146] of Wu, etc., discloses estimating a characteristic of the signal from the determined state in the learning mode, and [0244] of Wu discloses “Then the gait cube extraction module 120 can extract the spectrogram around the person 101 and therefore constructs the Doppler (or speed) dimension of the gait cubes for cycle extraction. Further, the gait cube extraction module 120 segments the data in time domain, with respect to the extracted gait cycles each with a single step, and removes unstable walking data by gait cycle validation or verification. Consecutive valid steps are aligned together to construct the μR-μD-T gait cubes. In some embodiments, dimensionality reduction is performed on the gait cube data to remove unnecessary and redundant information for gait recognition. The resulted gait cube 131 represents the reshaped μD and μR signatures at different distances from the transmitter and/or the receiver, which are aligned in range domain with respect to human torso, segmented in time domain with respect to walking cycles (steps), and cropped in frequency domain to include maximum signal content.” In addition, [0325] of Wu discloses “At step s8, using the extracted steps, and walking cycles, the system can verify validity of each step. Gait cycle validation may use multiple static and dynamic thresholds to remove acceleration and deceleration steps. This may include sub-steps s8a and s8b.” Therefore, Wu discloses verifying the validity of the estimate of the characteristic (such as a speed, or the speed based on the trace, etc.) from the signal received in the learning mode.
Please note that the terms “testing,” “learning,” “training,” and “characteristic of the test signal” are very broad; please further clarify the claim limitations using functional descriptive language to help move the prosecution forward. Therefore, the applicant’s argument is not persuasive.
Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). In response to applicant’s argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., a characteristic matching operation of test signals based on electric characteristics between equipment) are not recited in the rejected claim(s). Therefore, Wu discloses the recited claim limitations of claim 9, and the applicant’s argument is not persuasive.
Please further define what kind of testing is performed, what the characteristics of the test signal are, and what the characteristic matching operation of test signals based on electric characteristics between equipment entails, etc., to help move the prosecution forward.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XUYANG XIA whose telephone number is (571)270-3045. The examiner can normally be reached Monday-Friday 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch can be reached at 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
XUYANG XIA
Primary Examiner
Art Unit 2143
/XUYANG XIA/Primary Examiner, Art Unit 2143