Notice of Pre-AIA or AIA Status
Specification
The title of the invention, “INFORMATION PROCESSING APPARATUS, MOBILE OBJECT, CONTROL METHOD THEREOF, AND STORAGE MEDIUM,” is not descriptive and could apply to nearly all inventions submitted to the Examiner's Art Unit. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is not required but merely suggested as a possibility: “DETECTING A USER BASED ON USER UTTERANCE DATA AND IMAGE DATA”
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
Claim 8 includes:
(1) “a communication unit configured to communicate with a communication device of a user”
(2) “an imaging unit configured to image a surrounding portion of the mobile object”
Under the three-prong test, the above language will be interpreted under 112(f) because:
(A) Each of limitations (1)-(2) recited above uses the generic placeholder nonce term “unit” for performing a claimed function. See MPEP 2181, subsection I.A (“The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, paragraph 6: ‘mechanism for,’ ‘module for,’ ‘device for,’ ‘unit for,’ ‘component for,’ ‘element for,’ ‘member for,’ ‘apparatus for,’ ‘machine for,’ or ‘system for.’ Welker Bearing Co. v. PHD, Inc., 550 F.3d 1090, 1096, 89 USPQ2d 1289, 1293-94 (Fed. Cir. 2008)”).
(B) The phrase following the generic placeholder in each of items (1)-(2) (e.g., “configured to communicate with a communication device of a user”) constitutes functional language modifying the generic term identified in prong (A).
(C) With respect to (1)-(2), the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
Specifically, the corresponding structure for (1)-(2), respectively, includes:
(1) Spec. ¶¶ 37, 45.
(2) Spec. ¶¶ 28, 33.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
In sum, claims 1-9 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception to patentability (i.e., a law of nature, a natural phenomenon, or an abstract idea) and does not include an inventive concept that is something “significantly more” than the judicial exception under the January 2019 patentable subject matter eligibility guidance (2019 PEG) analysis that follows.
Revised Guidance Step 2A – Prong 1
Under the 2019 PEG step 2A, Prong 1 analysis, it must be determined whether the claims recite an abstract idea that falls within one or more designated categories of patent ineligible subject matter (i.e., organizing human activity, mathematical concepts, and mental processes) that amount to a judicial exception to patentability.
Here, with respect to independent claims 1 and 7-9, the claims recite the abstract idea of:
specifying a region according to a mark included in utterance information;
acquiring a movement direction of the user from the first utterance information, and setting a probability distribution indicating probabilities that the user exists to each of a plurality of regions, wherein the plurality of regions are areas that have been divided from the predetermined region;
acquiring second utterance information;
analyzing an image and extracting features of a person; and
presuming the user based on the set probability distribution, the extracted feature of one or more persons, and the second utterance information.
Specifically, the above limitations recite a mental process, since they could alternatively be performed in the human mind or with the aid of pen and paper. This conclusion follows from CyberSource Corp. v. Retail Decisions, Inc., where our reviewing court held that section 101 did not embrace a process defined simply as using a computer to perform a series of mental steps that people, aware of each step, can and regularly do perform in their heads. 654 F.3d 1366, 1373 (Fed. Cir. 2011); see also In re Grams, 888 F.2d 835, 840–41 (Fed. Cir. 1989); In re Meyer, 688 F.2d 789, 794–95 (CCPA 1982); Elec. Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354 (Fed. Cir. 2016) (“we have treated analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, as essentially mental processes within the abstract-idea category”).
For example, a human could perform the above limitation entirely mentally since the limitations amount to comparing and mentally processing data. See, e.g., MPEP 2106.04(a)(2), III, A (“claims do recite a mental process when they contain limitations that can practically be performed in the human mind, including for example, observations, evaluations, judgments, and opinions. Examples of claims that recite mental processes include . . . a claim to collecting and comparing known information (claim 1), which are steps that can be practically performed in the human mind, Classen Immunotherapies, Inc. v. Biogen IDEC, 659 F.3d 1057, 1067, 100 USPQ2d 1492, 1500 (Fed. Cir. 2011)”).
For example, a human viewing a user can mentally recognize a mark in an image or a real-life scene corresponding to a spoken utterance; can mentally determine a movement direction of the user based on utterance information (e.g., “I am heading towards the black light post”); can mentally impose a simple grid while viewing the user and estimate a probability that the user will enter each region of the grid based on the fact that the user is heading towards the light post (e.g., a human viewing the approximate scene in Lim FIG. 8, detailed below, where a user, for example, is standing at position 804); and can mentally recognize a feature of a person when viewing an image or a real-life scene and subsequently determine who the user is based on the recognized feature (e.g., a human mind, upon hearing the utterance “I have a mustache,” could mentally distinguish between viewed persons).
Furthermore, mental processes remain unpatentable even when automated to reduce the burden on the user of what once could have been done with pen and paper. See CyberSource, 654 F.3d at 1375 (“That purely mental processes can be unpatentable, even when performed by a computer, was precisely the holding of the Supreme Court in Gottschalk v. Benson.”).
In addition, the above cited limitation recites the abstract idea of a mathematical concept in addition to being a mental process since the limitation invokes “set a probability distribution”. See October 2019 Update: Subject Matter eligibility p. 3-4 “Mathematical Relationships” and “Mathematical Calculations” (“A mathematical relationship may be expressed in words or using mathematical symbols . . . [t]here is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word “calculating” in order to be considered a mathematical calculation. For example, a step of “determining” a variable or number using mathematical methods or “performing” a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.”) citing Diamond v. Diehr, Gottschalk v. Benson, Parker v. Flook, and Burnett v. Panasonic Corp (“using a formula to convert geospatial coordinates into natural numbers”).
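For illustration only (this sketch is not part of the claims or the cited references, and the region names and weights are hypothetical), the “set a probability distribution” limitation reduces to a mathematical calculation of the kind described in the October 2019 Update, namely normalizing per-region scores so that they sum to one:

```python
# Hypothetical sketch: "setting a probability distribution" over regions
# reduces to a mathematical calculation (normalization). Region names and
# weights below are illustrative only.
scores = {"region_a": 2.0, "region_b": 1.0, "region_c": 1.0}  # hypothetical weights

total = sum(scores.values())
prob_dist = {region: w / total for region, w in scores.items()}

# prob_dist now holds P(user exists in region) for each region; values sum to 1
```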
Revised Guidance Step 2A – Prong 2
Under the 2019 PEG step 2A, Prong 2 analysis, the identified abstract idea to which the claim is directed does not include limitations that integrate the abstract idea into a practical application, since the recited features of the abstract idea are being applied on a computer or computing device or via software programming that is simply being used as a tool (“apply it”) to implement the abstract idea. (See, e.g., MPEP § 2106.05(f)). This conclusion follows from the claim limitations, which only recite a generic “storage device” and “at least one processor” (claim 1), and a “communication unit” and “imaging unit” (claim 8), outside of the abstract idea.
In addition, merely “[u]sing a computer to accelerate an ineligible mental process does not make that process patent-eligible.” Bancorp Servs., L.L.C. v. Sun Life Assur. Co. of Canada (U.S.), 687 F.3d 1266, 1279 (Fed. Cir. 2012); see also CLS Bank Int’l v. Alice Corp. Pty. Ltd., 717 F.3d 1269, 1286 (Fed. Cir. 2013) (en banc) (“simply appending generic computer functionality to lend speed or efficiency to the performance of an otherwise abstract concept does not meaningfully limit claim scope for purposes of patent eligibility.”), aff’d, 573 U.S. 208 (2014). Accordingly, the additional elements do not transform the abstract idea into a practical application of the abstract idea.
In addition, acquiring steps (i.e., “acquire, from a communication device of a user, first utterance information by the user”, “acquire a captured image captured around the specified predetermined region”) constitute insignificant pre-solution activity that merely gathers data and, therefore, do not integrate the exception into a practical application. See In re Bilski, 545 F.3d 943, 963 (Fed. Cir. 2008) (en banc), aff’d on other grounds, 561 U.S. 593 (2010) (characterizing data gathering steps as insignificant extra-solution activity); see also CyberSource, 654 F.3d at 1371–72 (noting that even if some physical steps are required to obtain information from a database (e.g., entering a query via a keyboard, clicking a mouse), such data-gathering steps cannot alone confer patentability); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering). Accord Guidance, 84 Fed. Reg. at 55 (citing MPEP § 2106.05(g)).
Revised Guidance Step 2B
Under the 2019 PEG step 2B analysis, the additional elements are evaluated to determine whether they amount to something “significantly more” than the recited abstract idea (i.e., an inventive concept). Here, the additional elements, such as a “storage device” and “at least one processor” (claim 1), and a “communication unit” and “imaging unit” (claim 8), do not amount to an inventive concept since, as stated above in the step 2A, Prong 2 analysis, the claims are simply using the additional elements as a tool to carry out the abstract idea (i.e., “apply it”) on a computer or computing device and/or via software programming (See, e.g., MPEP §2106.05(f)). The additional elements are specified at a high level of generality to simply implement the abstract idea and are not themselves being technologically improved. See, e.g., MPEP §2106.05 I.A; Alice, 573 U.S. at 223 (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”). Thus, these elements, taken individually or together, do not amount to “significantly more” than the abstract ideas themselves.
The additional elements of the dependent claims merely refine and further limit the abstract idea of the independent claims and do not add any feature that is an “inventive concept” which cures the deficiencies of their respective parent claim under the 2019 PEG analysis. None of the dependent claims considered individually, including their respective limitations, include an “inventive concept” of some additional element or combination of elements sufficient to ensure that the claims in practice amount to something “significantly more” than patent-ineligible subject matter to which the claims are directed.
The elements of the instant process steps when taken in combination do not offer substantially more than the sum of the functions of the elements when each is taken alone. The claims as a whole, do not amount to significantly more than the abstract idea itself because the claims do not effect an improvement to another technology or technical field; the claims do not amount to an improvement to the functioning of an electronic device itself which implements the abstract idea (e.g., the general purpose computer and/or the computer system which implements the process are not made more efficient or technologically improved); the claims do not perform a transformation or reduction of a particular article to a different state or thing (i.e., the claims do not use the abstract idea in the claimed process to bring about a physical change. See, e.g., Diamond v. Diehr, 450 U.S. 175 (1981), where a physical change, and thus patentability, was imparted by the claimed process; contrast, Parker v. Flook, 437 U.S. 584 (1978), where a physical change, and thus patentability, was not imparted by the claimed process); and the claims do not move beyond a general link of the use of the abstract idea to a particular technological environment (e.g., “A method of controlling a mobile object” claim 9).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over US 20220097734 to Limaye (Lim) in view of US 20200103523 to Liu et al. (Liu).
With respect to claims 1 and 7-9, Lim discloses an information processing apparatus comprising:
a storage device that stores instructions; and
at least one processor that executes the instructions to:
(various system storage and processing devices as shown in FIG. 1 and Fig. 4-5 and corresponding description)
acquire, from a communication device of a user, first utterance information by the user;
(communication device 420, FIGS. 4-7 and 9, i.e., user 422; ¶ 48 “client computing device 420 . . . microphone”; FIG. 7, prompt for semantic information 704, semantic information returned to vehicle via user device 706; i.e., the semantic information can be utterance information by a user)
specify a predetermined region according to a mark included in the first utterance information;
(FIG. 10, 1040, “Determine a specified location based on the semantic information received in response to the prompt”)
acquire a captured image captured around the specified predetermined region;
(1050, FIG. 10 “Identify one or more semantic markers for the specified location from the semantic information”)
acquire a movement direction of the user from the first utterance information;
(¶ 74 “select semantic information to define the specified location . . . trajectory of the user's client computing device 420 towards the specified location”; user location is tracked, i.e., ¶ 78, the semantic information is used to detect the current location of the user such that a movement direction is acquired; i.e., ¶ 63 “The voice input may be, for example, the user saying ‘pick me up next to the traffic light pole.’”; i.e., the movement direction would be from the current user location close to 804 toward light pole 226 as shown in FIG. 8)
acquire, from a communication device of the user, second utterance information by the user;
(¶¶ 64-65 “parse voice input for information that includes reference to physical landmark and/or characteristics that may be used as a semantic marker . . . another prompt may be sent to the client device for additional user input”; ¶¶ 77-78 “semantic information sent from the client computing device 420 to the vehicle 100 may include characteristics of the user for the trip. For example, the user may input information related to physical characteristics of the user, clothing worn by the user, accessories held by the user, etc.”)
analyze the captured image and extract a feature of one or more persons and presume the user based on the extracted feature of one or more persons, and the second utterance information.
(¶¶ 77-78 “The vehicle's computing devices 110 may use the characteristics of the user to determine who the passenger to be picked up as the vehicle 100 approaches the specified location. The vehicle's computing devices 110 may adjust operation of the vehicle 100 to meet the determined passenger for pickup”; ¶¶ 63-66, 74; claims 3-4)
However, Lim fails to explicitly disclose “set a probability distribution indicating probabilities that the user exists to each of a plurality of regions, wherein the plurality of regions are areas that have been divided from the predetermined region”
However, determining probability distributions that a person exists in each of a plurality of regions was well known in the art before the effective filing date. For example, Liu, from the same field of endeavor, also discloses an AV tracking the location of a person (i.e., FIG. 1, vehicle 102, pedestrian 118, within sensed region 114) and setting a probability distribution indicating probabilities that a pedestrian exists in each of a plurality of regions (¶ 13 “estimating spatial occupancy . . . spatial grid may be designated to indicate whether the respective cell is occupied (e.g., by a static or dynamic object) . . . occupancy status of each cell may be represented by one or more probabilities. For instance, each cell in the spatial grid may have a first probability of being occupied . . . probability of being free space”, wherein the probability distribution for the occupancy grid is determined based at least on an acquired movement direction; ¶ 14 “Each adjacent cell having an occupancy probability above a threshold probability can be designated as being occupied . . . [i]n the case of a dynamic object, a radar track having a location and a velocity can be received and provided to a tracker of a vehicle computing device. The tracker outputs a bounding box representing a size, shape, and/or pose of the tracked dynamic object. The tracker may additionally or alternatively output a trajectory or vector representing a current direction of travel of the dynamic object”; ¶¶ 24, 36-37, cell probability can be based on any techniques or heuristics; ¶¶ 45-46 “The tracker may additionally or alternatively output a trajectory or vector representing a current direction of travel of the dynamic object. At operation 318, one or more cells of the radar spatial grid associated with a region occupied by the bounding box are designated as being occupied by the dynamic object (e.g., dynamically occupied). In some examples, the cells of the radar spatial grid may further be labeled with a trajectory or vector (velocity and direction) of the dynamic object”; ¶¶ 63, 77-78, claims 6 and 10-12)
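For illustration only (this sketch is not taken from Lim or Liu; the function name, grid parameters, and weighting are hypothetical), an occupancy-grid-style probability distribution over regions divided from a predetermined pickup area, biased toward cells lying along a stated movement direction, might be sketched as:

```python
import math

def set_probability_distribution(grid_size, user_pos, direction):
    """Hypothetical sketch: divide a predetermined region into grid_size x
    grid_size cells and assign each cell a probability that the user exists
    there, weighting cells that lie along the user's stated movement
    direction (and nearby cells) more heavily. Illustrative only."""
    dir_norm = math.hypot(*direction)
    weights = {}
    for x in range(grid_size):
        for y in range(grid_size):
            dx, dy = x - user_pos[0], y - user_pos[1]
            d = math.hypot(dx, dy)
            # cosine alignment between the cell offset and the movement direction
            align = 1.0 if d == 0 else (dx * direction[0] + dy * direction[1]) / (d * dir_norm)
            weights[(x, y)] = math.exp(align - 0.5 * d)  # favor aligned, nearby cells
    total = sum(weights.values())
    return {cell: w / total for cell, w in weights.items()}

# User at the grid origin whose utterance indicates movement in the +x direction
dist = set_probability_distribution(5, (0, 0), (1, 0))
```

Under this sketch, cells along the stated heading receive more probability mass than cells off to the side, which is the sense in which a future-trajectory input could bias the distribution toward cells overlapping the user's path.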
Accordingly, in view of the combined teachings of Lim and Liu cited above, it would have been obvious to one of ordinary skill in the art as of the effective filing date for the system of Lim to more explicitly calculate the location of the user using a set probability distribution indicating probabilities that the user exists to each of a plurality of regions, wherein the plurality of regions are areas that have been divided from the predetermined region, such that the user presumption is further based on said probability distribution, in order to increase the accuracy of tracking and differentiating the user; i.e., the future-trajectory input of Lim (e.g., the user utterance indicating the change in position of FIG. 8, from initial position 804 to final position 808) can be included in the probability distribution of Liu to increase the probabilities of tracked cells that overlap that path. In addition, using the information of Lim combined with the predicted trajectory data of Liu (¶ 78 “occupancy information to the prediction component 930 to generate predicted trajectories for one or more objects”) reduces occluded regions of a probability-based occupancy grid, thereby improving safety.
With respect to claim 2, Lim in view of Liu fails to explicitly disclose processor instructions to generate third utterance information inquiring a feature of the user based on the feature of one or more persons extracted from the captured image, and send the third utterance information to the communication device of the user.
However, repeated iterations of already-performed processes would have been an obvious modification to a PHOSITA as of the effective filing date, for at least the reason that Lim at least suggests such a feature, as Lim discloses: 1) inquiring a feature of the user; 2) reviewing an image to determine if a user has a particular feature to identify them; and 3) generating additional utterances when previous utterances are insufficient to identify the user
(Lim, ¶¶ 64-65 “parse voice input for information that includes reference to physical landmark and/or characteristics that may be used as a semantic marker . . . another prompt may be sent to the client device for additional user input”; ¶¶ 77-78 “semantic information sent from the client computing device 420 to the vehicle 100 may include characteristics of the user for the trip. For example, the user may input information related to physical characteristics of the user, clothing worn by the user, accessories held by the user, etc.”). In addition, if an attempt to identify a person is unsuccessful, there are a limited number of known options to achieve the intended result, wherein repeating the attempt to identify is the most obvious option. A PHOSITA would have recognized that generating third utterance information by adding an additional inquiry of a feature of the user and sending it to the user would have yielded predictable results and resulted in an improved system, i.e., via the suggestions provided in Lim cited above. See MPEP 2143, “obvious to try,” example 9; Perfect Web Techs., Inc. v. InfoUSA, Inc., 587 F.3d 1324, 1328-29 (Fed. Cir. 2009) (repeating a process that was more likely to result in a desired outcome is “simple logic,” and “the final step [of the claimed invention] is merely the logical result of common sense application of the maxim ‘try, try again.’”).
With respect to claim 3, Lim in view of Liu, in view of the obviousness modification discussed above, disclose acquiring, from the communication device of the user, the second utterance information by the user after sending the third utterance information to the communication device of the user
(Lim, FIG. 7, prompt information 703, semantic information 706; ¶¶ 64-65 “parse voice input for information that includes reference to physical landmark and/or characteristics that may be used as a semantic marker . . . another prompt may be sent to the client device for additional user input”; ¶¶ 77-78 “semantic information sent from the client computing device 420 to the vehicle 100 may include characteristics of the user for the trip. For example, the user may input information related to physical characteristics of the user, clothing worn by the user, accessories held by the user, etc.”)
With respect to claim 4, Lim in view of Liu, in view of the obviousness modification discussed above, disclose that the second utterance information includes a response related to the feature of the user for the third utterance information inquiring the feature of the user.
(Lim, FIG. 7, prompt information 703, semantic information 706; ¶¶ 64-65 “parse voice input for information that includes reference to physical landmark and/or characteristics that may be used as a semantic marker . . . another prompt may be sent to the client device for additional user input”; ¶¶ 77-78 “semantic information sent from the client computing device 420 to the vehicle 100 may include characteristics of the user for the trip. For example, the user may input information related to physical characteristics of the user, clothing worn by the user, accessories held by the user, etc.”)
With respect to claim 5, Lim in view of Liu disclose presuming the user includes correcting the probability distribution further based on position information of the communication device.
(Lim, i.e., ¶ 78 “specified location may also be based on an updated user location. The location of the user's client computing device 420 determined using GPS and/or other location services at the user's client computing device 420”; FIG. 7, prompt information 703, semantic information 706; ¶¶ 64-65 “parse voice input for information that includes reference to physical landmark and/or characteristics that may be used as a semantic marker . . . another prompt may be sent to the client device for additional user input”; ¶¶ 77-78 “semantic information sent from the client computing device 420 to the vehicle 100 may include characteristics of the user for the trip. For example, the user may input information related to physical characteristics of the user, clothing worn by the user, accessories held by the user, etc.”; as modified by Liu: ¶ 13 “estimating spatial occupancy . . . spatial grid may be designated to indicate whether the respective cell is occupied (e.g., by a static or dynamic object) . . . occupancy status of each cell may be represented by one or more probabilities. For instance, each cell in the spatial grid may have a first probability of being occupied . . . probability of being free space”, wherein the probability distribution for the occupancy grid is determined based at least on an acquired movement direction; ¶ 14 “Each adjacent cell having an occupancy probability above a threshold probability can be designated as being occupied . . . [i]n the case of a dynamic object, a radar track having a location and a velocity can be received and provided to a tracker of a vehicle computing device. The tracker outputs a bounding box representing a size, shape, and/or pose of the tracked dynamic object. The tracker may additionally or alternatively output a trajectory or vector representing a current direction of travel of the dynamic object”; ¶¶ 24, 36-37, cell probability can be based on any techniques or heuristics; ¶¶ 45-46 “The tracker may additionally or alternatively output a trajectory or vector representing a current direction of travel of the dynamic object. At operation 318, one or more cells of the radar spatial grid associated with a region occupied by the bounding box are designated as being occupied by the dynamic object (e.g., dynamically occupied). In some examples, the cells of the radar spatial grid may further be labeled with a trajectory or vector (velocity and direction) of the dynamic object”; ¶¶ 63, 77-78, claims 6 and 10-12)
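As an illustration of the claim 5 concept only (this sketch is hypothetical and not taken from either reference; the function name, grid, and Gaussian weighting are assumptions), correcting such a distribution with position information of the communication device can be framed as a Bayesian-style reweighting toward cells near the reported GPS fix:

```python
import math

def correct_with_position(prob_dist, device_pos, sigma=1.0):
    """Hypothetical sketch: multiply each cell's prior probability by a
    Gaussian likelihood centered on the communication device's reported
    GPS position, then renormalize (a Bayesian-style correction)."""
    posterior = {}
    for (x, y), p in prob_dist.items():
        d2 = (x - device_pos[0]) ** 2 + (y - device_pos[1]) ** 2
        posterior[(x, y)] = p * math.exp(-d2 / (2 * sigma ** 2))
    total = sum(posterior.values())
    return {cell: w / total for cell, w in posterior.items()}

prior = {(0, 0): 0.25, (1, 0): 0.25, (0, 1): 0.25, (1, 1): 0.25}  # uniform prior
posterior = correct_with_position(prior, (1, 0))  # device GPS fix at cell (1, 0)
```

After the correction, the cell containing the device's reported position carries the largest probability, which is the sense in which the presumption is "corrected" by the position information.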
With respect to claim 6, Lim in view of Liu disclose the captured image captured around the specified predetermined region includes at least one of:
a captured image captured by a mobile object located around the user; and
a captured image captured by an imaging unit located around the mobile object.
(Lim: FIG. 8; FIG. 11, 1150-1160; ¶ 65 “the vehicle's computing devices may use image processing to identify the one or more candidate locations in the photo or video that are at or near the pickup location”; ¶ 63, camera 427 . . . photo 708; ¶ 82 “using a perception system of the autonomous vehicle, one or more semantic markers may be identified for the specified location from the semantic information”; Liu: ¶ 60, image sensor . . . track motion of object)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH J MALKOWSKI whose telephone number is (313)446-4854. The examiner can normally be reached 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Faris Almatrahi can be reached at 313-446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KENNETH J MALKOWSKI/Primary Examiner, Art Unit 3667