DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office Action is in response to the Amendments and Remarks filed on 01/21/2026 for application number 18/692,133, filed on 03/14/2024, in which claims 1-4 & 6-11 were originally presented for examination. Claims 1-4 & 6-11 are currently amended and pending.
Priority
Acknowledgment is made of applicant’s claim that this application is a national stage entry under 35 U.S.C. 371 of PCT/JP2022/032867, filed on 08/31/2022.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/14/2024 has been received and considered.
Examiner Notes
Examiner cites particular paragraphs (or columns and lines) in the references as applied to Applicant’s claims for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner. The prompt development of a clear issue requires that the Applicant’s replies meet the objections to and rejections of the claims. Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP §2163.06. Applicant is reminded that the Examiner is entitled to give the Broadest Reasonable Interpretation (BRI) to the language of the claims. Furthermore, the Examiner is not limited to a definition of Applicant’s that is not specifically set forth in the claims. See MPEP §2111.01.
Response to Arguments
Arguments filed on 01/21/2026 have been fully considered and are addressed as follows:
Regarding the Claim Objections: The claim objections are withdrawn, as amended claim 8 filed on 01/21/2026 has properly addressed the claim informality objections recited in the Non-Final Office Action mailed on 10/23/2025. However, Applicant’s amendment necessitated the new grounds of claim objections presented below.
Regarding the claim rejections under 35 USC §102(a)(1): Applicant’s arguments regarding the rejections of claims 1-4 & 8-11 as being clearly anticipated by the prior art of Yoshida (US-2021/0248393-A1) have been fully considered. However, those arguments are not persuasive.
Applicant asserts that:
“Yoshida does not teach or suggest travel sound recognizability assessment …
Yoshida does not teach the claimed risk assessment structure …
Yoshida's risk assessment is based on the presence and characteristics of detected external sounds, not on the vehicle's acoustic visibility to external persons.”
(see Remarks pages 9-11; emphasis added)
The examiner respectfully disagrees. Examiner notes that Applicant’s arguments all focus on new limitations added to the amended base claims 1, 9, 10 & 11, apparently to overcome the anticipation rejection under §102(a)(1) as recited in the Non-Final Office Action mailed on 10/23/2025. Those arguments are rendered moot in light of the new grounds of rejection outlined below, which were necessitated by Applicant’s amendment; i.e., Applicant’s arguments and amendments have been addressed in the new rejection outlined below.
In response to Applicant’s argument that the references fail to show certain features of applicant's invention, it is noted that the features upon which applicant relies (i.e., sound recognizability assessment and/or acoustic visibility to external persons) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
For at least the foregoing reasons, and the rejections outlined below, the prior art rejections are maintained.
Claim Objections
Claims 7 & 8 are objected to because of the following informalities:
Claim 7 recites “the information regarding a state of continuous” in line 9. It should be “the information regarding the state of continuous”.
Claim 8 recites “a volume of the travel sound” in line 3. It should be “the volume of the travel sound”. See “a volume of a travel sound of a subject vehicle” in line 11 of the currently amended claim 1 filed on 01/21/2026.
Appropriate correction is required.
Claim Rejections - 35 USC §112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
The amended base claims 2, 6 & 7 are rejected under 35 USC §112(a) or pre-AIA 35 USC §112, first paragraph, as failing to comply with the written description requirement.
The claims contain subject matter which was not described in the Specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 USC §112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The limitation “a volume rate of the travel sound of the subject vehicle”, currently recited in line 3 of amended claim 2, is not supported.
Applicant’s Remarks did not note any support in the Specification and/or Figures, and the Examiner could not find any support for this limitation in the Specification or the Drawings. Accordingly, the said limitation (and any referral to or dependency on it) is new matter introduced to claim 2 without support in the Specification.
For at least the foregoing reasons, the rejection under §112(a), as failing to comply with the written description requirement, has been issued as outlined above. Applicant is reminded that, for the purpose of prior art examination, the Examiner is entitled to give the Broadest Reasonable Interpretation (BRI) to the language of these claim limitations.
Claim Rejections – 35 USC §101
35 USC §101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4 & 6-11 are rejected under 35 USC §101 because the claimed invention is directed to an abstract idea without significantly more.
The determination of whether a claim recites patent ineligible subject matter is a two-step inquiry.
STEP 1: The claim does not fall within one of the four statutory categories of invention (process, machine, manufacture, or composition of matter); see MPEP 2106.03; or
STEP 2: The claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis (see MPEP 2106.04):
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP 2106.04(II)(A)(1)
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP 2106.04(II)(A)(2)
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP 2106.05
Claim 1 recites: a driver assistance system configured to assist in driving a subject vehicle, the driver assistance system comprising:
one or more processors; and one or more memories communicably coupled to the one or more processors, wherein the processors are configured to [applying the abstract idea using generic computing module]:
detect a blind spot region created by an obstacle present around the subject vehicle [mental process/step];
acquire travel sound related information, comprising (i) travel sound information indicating a volume of a travel sound of a subject vehicle, and (ii) surrounding environmental sound information regarding sounds in the blind spot region [pre-solution activity (data gathering) using generic sensors];
determine whether or not the travel sound of the subject vehicle is recognizable by a person in the blind spot region, based on the travel sound related information acquired, and generating a travel sound unrecognizability risk indicating whether the travel sound of the subject vehicle is recognizable by the person in the blind spot region [mental process/step];
generate risk distribution data indicating risk distribution in which a risk associated with the obstacle present around the subject vehicle, a risk associated with the blind spot region, and the travel sound unrecognizability risk are reflected [mental process/step]; and
set a driving condition of the subject vehicle based on the risk distribution data [mental process/step].
101 Analysis - Step 1: Statutory category – Yes
The claim recites a driver assistance system (or a vehicle, a method comprising at least one step, or a non-transitory tangible recording medium containing a computer program, of base claims 9, 10, or 11, respectively). The claims fall within one of the four statutory categories. See MPEP 2106.03.
Step 2A Prong one evaluation: Judicial Exception – Yes – Mental processes
In Step 2A, Prong one of the 2019 Patent Eligibility Guidance (PEG), a claim is to be analyzed to determine whether it recites subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) mental processes, and/or c) certain methods of organizing human activity.
The Office submits that the foregoing bolded limitations constitute judicial exceptions in terms of “mental processes” because, under their broadest reasonable interpretation, the limitations can be “performed in the human mind, or by a human using a pen and paper”. See MPEP 2106.04(a)(2)(III)
The claim recites the limitations of: detect a blind spot region created by an obstacle present around the subject vehicle; determine whether or not the travel sound of the subject vehicle is recognizable by a person in the blind spot region, based on the travel sound related information acquired, and generating a travel sound unrecognizability risk indicating whether the travel sound of the subject vehicle is recognizable by the person in the blind spot region; generate risk distribution data indicating risk distribution in which a risk associated with the obstacle present around the subject vehicle, a risk associated with the blind spot region, and the travel sound unrecognizability risk are reflected; and set a driving condition of the subject vehicle based on the risk distribution data.
These limitations, as drafted, are simple processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of “one or more processors”. That is, other than reciting “processors”, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the “processors” language, the claim encompasses a person looking at the data collected and forming a simple judgment or conclusion in the mind or by a human using a pen and paper. The mere nominal recitation of “one or more processors” does not take the claim limitations out of the mental process grouping. Thus, the claim recites a mental process.
Step 2A Prong two evaluation: Practical Application - No
In Step 2A, Prong two of the 2019 PEG, a claim is to be evaluated whether, as a whole, it integrates the recited judicial exception into a practical application. As noted in MPEP 2106.04(d), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. The courts have indicated that additional elements such as: merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.” The Office submits that the foregoing underlined limitation(s) recite additional elements that do not integrate the recited judicial exception into a practical application.
The claim recites the additional element “one or more processors” and/or the step of “acquire travel sound related information, comprising (i) travel sound information indicating a volume of a travel sound of a subject vehicle, and (ii) surrounding environmental sound information regarding sounds in the blind spot region”.
The acquiring step is recited at a high level of generality (i.e., as a general means of gathering sound information for use in the detecting, determining, generating and/or setting steps), and amounts to mere data gathering, which is a form of insignificant pre-solution activity.
The “one or more processors” element merely automates the detecting, determining, generating and/or setting steps, thereby acting as a generic computer to perform the abstract idea and/or “apply” the otherwise mental judgments using a generic or general-purpose processor, i.e., a computer. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B evaluation: Inventive concept - No
In Step 2B of the 2019 PEG, a claim is to be evaluated as to whether the claim, as a whole, amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in 2B, i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B. See MPEP 2106.05(f).
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the acquiring step and the “one or more processors” element were considered to be insignificant extra-solution activity in Step 2A, and thus they are re-evaluated in Step 2B to determine if they are more than what is well-understood, routine, conventional activity in the field.
The “one or more processors” element is recited at a high level of generality and merely automates the detecting, determining, generating and/or setting steps, thereby acting as a generic computer module to perform the abstract idea and/or “apply” the otherwise mental judgments using a generic or general-purpose processor, i.e., the server (20) of Fig. 24 & Specification PG Pub ¶¶358-369.
MPEP 2106.05(d)(II) indicates that mere collection or receipt of data over a network, i.e., the “acquire travel sound related information …” step, is a well-understood, routine, and conventional function when claimed in a merely generic manner (as it is here); see “sound collection device such as a small microphone” in Specification PG Pub ¶128. This amounts to mere pre-solution data gathering, which is a form of insignificant extra-solution activity. Accordingly, a conclusion that the acquiring travel sound related information step and the “one or more processors” element are well-understood, routine, conventional activity is supported under Berkheimer. Thus, the claim is ineligible.
Independent claims 9, 10 & 11, recite similar limitations performed by the system of claim 1. Therefore, claims 9, 10 & 11 are rejected under the same rationales used in the rejections of claim 1 as outlined above.
Dependent claims 2-4 & 6-8 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine and conventional additional elements that do not integrate the judicial exception into a practical application and amount to mere input and/or output data manipulation. Therefore, dependent claims 2-4 & 6-8 are not patent eligible under the same rationale as provided in the rejection of claim 1.
Thus, claims 1-4 & 6-11 are ineligible under 35 USC §101.
Claim Rejections - 35 USC §102
In the event the determination of the status of the application as subject to AIA 35 USC §102 and §103 (or as subject to pre-AIA 35 USC §102 and §103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 USC §102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4 & 8-11 are rejected under 35 USC §102(a)(1) as being clearly anticipated by PG Pub. No. US-2021/0248393-A1 by Kaoru Yoshida (hereinafter “Yoshida”), which is found in the IDS submitted on 03/14/2024.
As per claim 1, Yoshida discloses a driver assistance system configured to assist in driving a subject vehicle (Yoshida, in at least Fig(s). 4 & 5 and ¶¶66, 84 & 87, discloses the vehicle 12 is equipped with a driver assistance function, wherein the blind spot information acquisition device 10 is configured to activate a driver assistance function to keep the vehicle 12 from moving closer to the position where the sound is occurring), the driver assistance system comprising:
one or more processors (Yoshida, in at least Fig(s). 4 & 5 and ¶47, discloses functional configurations are realized as a result of the CPU 22); and one or more memories communicably coupled to the one or more processors (Yoshida, in at least Fig(s). 4 & 5 and ¶47, discloses functional configurations are realized as a result of the CPU 22 reading and executing the programs stored in the ROM 24 or the storage 28), wherein the processors are configured to
detect a blind spot region created by an obstacle present around the subject vehicle (Yoshida, in at least Fig(s). 4 & 5 and ¶¶38, 66, 84 & 87-89, discloses the blind spot information acquisition device 10 to acquire information relating to the pedestrian P2 in this blind spot area, wherein the blind spot information acquisition device 10 is configured to activate a driver assistance function to keep the vehicle 12 from moving closer to the position where the sound is occurring);
acquire travel sound related information comprising (i) travel sound information indicating a volume of a travel sound of a subject vehicle, and (ii) surrounding environmental sound information regarding sounds in the blind spot region (Yoshida, in at least Fig(s). 4 [reproduced here for convenience] & 5 and ¶¶36, 47 & 50, discloses blind spot information acquisition device 10, which includes microphone 18 serving as sound pickup devices, wherein the blind spot information acquisition device 10 is configured to include, as functional configurations, a sound acquisition component 40. Yoshida discloses, because the sound is picked up by the microphones 18 of the vehicle 12 that is traveling, the sound occurrence position inference component 42 may infer the distance to the position where the sound occurred on the basis of the frequency of the sound that has been picked up and the change in its volume over time);
Yoshida’s Fig. 4
determine whether or not the travel sound of the subject vehicle is recognizable by a person in the blind spot region, based on the travel sound related information acquired, and generating a travel sound unrecognizability risk indicating whether the travel sound of the subject vehicle is recognizable by the person in the blind spot region (Yoshida, in at least Fig(s). 4 & 5 and ¶¶36, 38, 47-57, 66-67, 80 & 87, discloses the CPU 22 judges whether or not the sound sources are inferable [i.e., determine whether or not the travel sound of the subject vehicle is recognizable by a person], wherein information about sound sources having the potential to obstruct driving are registered beforehand in the storage 28 and the CPU 22 infers the sound sources by comparing the registered sound source information and the sounds that have been acquired by the sound acquisition component 40. Yoshida further discloses the vehicle 12 to which the blind spot information acquisition device 10 acquires information relating to the pedestrian P2 in this blind spot area, wherein the sound source position identification component 46 recognizes that there is a sound occurring in a blind spot in a case in which a sound source cannot be identified from the information that has been acquired by the object acquisition component 44 on the basis of the positions of occurrence of the sounds that have been inferred by the sound occurrence position inference component 42);
generate risk distribution data indicating risk distribution in which a risk associated with the obstacle present around the subject vehicle, a risk associated with the blind spot region, and the travel sound unrecognizability risk are reflected (Yoshida, in at least Fig(s). 4 & 5 and ¶¶36, 47-57, 80 & 87, discloses the warning level decision component 50 decides the level of the warning when warning the occupant in a case in which a sound occurring in a blind spot has been verified by the sound source position identification component 46 [i.e., risk associated with the obstacle present around the subject vehicle], raises the level of the warning in a case in which it has been inferred by the sound source inference component 48 that the sound source in the blind spot is a child as compared to a case where it has been inferred that the sound source is an adult [i.e., travel sound unrecognizability risk], and raise the level of the warning in a case in which it has been inferred by the sound occurrence position inference component 42 that the sound source in the blind spot is moving closer to the vehicle 12 as compared to a case where the sound source is moving away from the vehicle 12 [i.e., risk associated with the blind spot region]. Yoshida further discloses the warning component 52 issues a warning to the occupant in a case in which a sound occurring in a blind spot has been recognized by the sound source position identification component 46, and warns the occupant by displaying on the monitor 34 content that calls the occupant's attention to the blind spot, e.g., the warning component 52 warns the occupant by outputting an alarm sound from the speaker 36. 
Yoshida also discloses the CPU 22 judges whether or not the sound sources are inferable, wherein information about sound sources having the potential to obstruct driving are registered beforehand in the storage 28 and the CPU 22 infers the sound sources by comparing the registered sound source information and the sounds that have been acquired by the sound acquisition component 40); and
set a driving condition of the subject vehicle based on the risk distribution data (Yoshida, in at least Fig(s). 4 & 5 and ¶¶36, 47-57 & 87, discloses the vehicle 12 is equipped with a driver assistance function, wherein the blind spot information acquisition device 10 is configured to activate a driver assistance function to keep the vehicle 12 from moving closer to the position where the sound is occurring, wherein the blind spot information acquisition device 10 is configured to include, as functional configurations, a sound acquisition component 40, a warning level decision component 50, and a warning component 52, wherein the warning level decision component 50 decides the level of the warning when warning the occupant in a case in which a sound occurring in a blind spot has been verified by the sound source position identification component 46. Yoshida further discloses the warning component 52 issues a warning to the occupant in a case in which a sound occurring in a blind spot has been recognized by the sound source position identification component 46, and warns the occupant by displaying on the monitor 34 content that calls the occupant's attention to the blind spot, e.g., the warning component 52 warns the occupant by outputting an alarm sound from the speaker 36).
As per claim 2, Yoshida discloses the driver assistance system according to claim 1, accordingly, the rejection of claim 1 above is incorporated. Yoshida further discloses wherein the travel sound related information includes the travel sound information and the surrounding environmental sound information, the travel sound information indicating a volume rate of the travel sound of the subject vehicle, and the surrounding environmental sound information including one or both of a volume and a kind of a surrounding environmental sound in the blind spot region (Yoshida, in at least Fig(s). 4-7 and ¶¶36, 47 & 62, discloses Blind Spot Information Acquisition Device 10, which includes Microphone 18 serving as sound pickup devices, wherein the blind spot information acquisition device 10 is configured to include, as functional configurations, a sound acquisition component 40. Yoshida further discloses, because the sound is picked up by the microphones 18 of the vehicle 12 that is traveling, the sound occurrence position inference component 42 may infer the distance to the position where the sound occurred on the basis of the frequency of the sound that has been picked up and the change in its volume over time [i.e., volume rate]. Yoshida also discloses acquire sounds in area around vehicle 12, as illustrated in Fig. 6 S102 the CPU 22 acquires the sounds in the area around the vehicle 12 (sound acquisition step). Specifically, the CPU 22 acquires, by means of the function of the sound acquisition component 40, the sounds that have been picked up by the microphones 18).
As per claim 3, Yoshida discloses the driver assistance system according to claim 1, accordingly, the rejection of claim 1 above is incorporated. Yoshida further discloses wherein the processors are configured to:
determine, when determining whether the travel sound of the subject vehicle is recognizable, a state of recognizability of the travel sound of the subject vehicle; and
change, when setting the driving condition of the subject vehicle, a degree of deceleration of the subject vehicle or a degree of change in a track in a direction away from the blind spot region, in accordance with the state of recognizability determined (Yoshida, in at least Fig(s). 4-7 and ¶¶62, 66 & 84, discloses the CPU 22 infers the directions of the positions of occurrence of the sounds relative to the vehicle 12 and the distances from the vehicle 12 to the positions of occurrence of the sounds. Yoshida further discloses the CPU 22 indicates on the monitor 34 the fact that a person is standing in front of the vehicle 12, wherein in a case in which the vehicle 12 is equipped with a driver assistance function, the CPU 22 may control the brakes to reduce the speed of the vehicle 12).
As per claim 4, Yoshida discloses the driver assistance system according to claim 1, accordingly, the rejection of claim 1 above is incorporated. Yoshida further discloses wherein the processors are configured to:
estimate presence or absence of a rush-out target object that possibly rushes out in front of the subject vehicle; and
when the presence of the rush-out target object in the blind spot region is estimated, determine, when determining whether the travel sound of the subject vehicle is recognizable, whether or not the travel sound of the subject vehicle is recognizable by the rush-out target object in the blind spot region (Yoshida, in at least Fig(s). 1-3 and ¶¶8-11, 48, 54, 57, 71, discloses the warning component 52 issues a warning to the occupant in a case in which a sound occurring in a blind spot has been recognized by the sound source position identification component 46. Because of this, the occupant's attention can be directed to a person or a vehicle, for example, in a blind spot).
As per claim 5: Cancelled.
As per claim 8, Yoshida discloses the driver assistance system according to claim 1, accordingly, the rejection of claim 1 above is incorporated. Yoshida further discloses wherein the processors are configured to
set the driving condition to change a volume of the travel sound of the subject vehicle (Yoshida, in at least Fig(s). 4-7 and ¶¶36, 47 & 62, discloses, because the sound is picked up by the microphones 18 of the vehicle 12 that is traveling, the sound occurrence position inference component 42 may infer the distance to the position where the sound occurred on the basis of the frequency of the sound that has been picked up and the change in its volume over time).
As per claims 9-11, the claims are directed towards a vehicle, a non-transitory recording medium, and a driver assistance method that recite similar limitations performed by the system of claim 1. The cited portions of Yoshida used in the rejection of claim 1 disclose the same steps to be performed by the vehicle, the non-transitory recording medium & the driver assistance method of claims 9, 10 & 11, respectively. Therefore, claims 9-11 are rejected under the same rationales used in the rejection of claim 1 as outlined above.
Allowable Subject Matter
Claims 6 & 7 are objected to as being dependent upon rejected base claims 1 & 2, but would be allowable if rewritten to overcome the rejections under 35 USC §101 and/or 35 USC §112(a) set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. See previously mailed PTO-892 form.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tarek Elarabi, whose telephone number is (313)446-4911. The examiner can normally be reached Monday through Thursday, 6:00 AM - 4:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Nolan can be reached on (571)270-7016. The fax phone number for the organization where this application or proceeding is assigned is (571)273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or (571)272-1000.
/Tarek Elarabi/Primary Examiner, Art Unit 3661