DETAILED ACTION
This action is responsive to the application filed on 4/10/2024.
Claims 1-13 are pending in this application. Claims 1, 3, and 7 are independent claims.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 4/10/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claims 1, 7, and 13 are objected to because of the following informalities:
Claim 1, remove “and” preceding “wherein” on the 5th and 6th lines and add “and” before “wherein” on the 8th line.
Claim 7, remove “and” preceding “wherein” on the 5th and 6th lines and add “and” before “wherein” on the 7th line.
Claim 13, replace “furth” with “fourth”.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the independent claims 1, 3, and 7 respectively recite a system comprising a single element, namely, a menu module, a move module, and a rotate module. All of these modules are defined by the specification to be software components performing certain functions [see pp. 5-6; see also the top paragraph of p. 3]. None of the dependent claims includes any additional hardware to resolve this issue. Therefore, the claim elements are all characterized as software per se, which is not a “process,” a “machine,” a “manufacture,” or a “composition of matter” as defined in 35 U.S.C. § 101. Examiner respectfully suggests overcoming the rejection by explicitly reciting elements of hardware devices, such as a head-mounted display and sensors, as parts of the display system.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 4-13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 4 recites “… a position difference between the VCO's x, y, and z-axis is appended to the existing position of the VTO and moved accordingly.”
Examiner respectfully notes that this phrase, as recited, is confusing and fails to particularly point out and distinctly claim the subject matter which the inventor regards as the invention. Consulting the disclosure, the position difference appears to be that between the VCO's position before and after a movement of the VCO, with the VTO being moved accordingly [see fig. and the corresponding description].
Claim 4 additionally recites the term “optimum” which is a relative term and thus renders the claim indefinite. The term “optimum” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Thus, one of ordinary skill in the art would not be reasonably apprised of the scope of “an optimum arm’s reach distance” recited by the claim.
Accordingly, dependent claims 5-6 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
For purposes of art rejection, Examiner interprets the above-indicated claim portion as “… a position difference between the VCO's x, y, and z-axis before and after a movement of the VCO is appended to the existing position of the VTO and the VTO is moved accordingly.”
Claim 6 additionally recites “the speed multiplier” which is not supported by claim 3 upon which claim 6 depends. Claim 6 is thus interpreted to depend on claim 5 for proper antecedent basis.
Independent claim 7 recites “… rotates the VTO around a predetermined pivot point by a specified degree based on a user's first or second gesture, … wherein the rotation is performed around an X, Y, or Z axis based on the first or second gesture”. Examiner notes that the rotation of the VTO can either be around a predetermined pivot point or around an axis, but not both.
Accordingly, dependent claims 8-13 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
For purposes of art rejection, Examiner interprets the above-indicated claim portion as “… rotates the VTO by a specified degree based on a user's first or second gesture, … wherein the rotation is performed around an X, Y, or Z axis based on the first or second gesture”.
Claim 10 additionally recites the term “briefly” which is a relative term and thus renders the claim indefinite. The term “briefly” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Thus, one of ordinary skill in the art would not be reasonably apprised of the scope of “appear briefly” recited by the claim.
Examiner Comments
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim 7 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sun, US PGPUB 2023/0195236 A1 (hereinafter as Sun).
Regarding independent claim 7, Sun discloses a system for manipulating a virtual target object (VTO) in a virtual environment [see fig. 1 and [0026] indicating a head-mounted display providing AR/VR services; note in [0073] the object 510 in the virtual world] comprising a rotate module that can be activated by performing a first gesture or a second gesture when the rotate module is inactive and that rotates the VTO by a specified degree based on a user's first or second gesture, wherein the rotation will stop when a user releases the first or the second gesture or when a maximum degree of rotation is reached, and wherein the rotation is performed around an X, Y, or Z axis based on the first or second gesture [see fig. 7 and [0073]-[0076]; note the rotation of object 510 according to the translation of the hand (a gesture), the rotation activated by the gesture, and the stopping upon the release of that particular gesture; note that the degree of rotation can be related to the translation magnitude and that the rotation is along rotation axis 502 which can be aligned with any of the x-, y-, or z- axes with no loss of generality].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over NOGUCHI, US PGPUB 2017/0294048 A1 (hereinafter as Noguchi) in view of Ravasz et al., US Patent No. 10,890,983 B2 (hereinafter as Ravasz).
Regarding independent claim 1, Noguchi teaches a system for manipulating a virtual target object (VTO) in a virtual environment [note the system of fig. 1 for use in a virtual environment; note in [0007] the manipulation of a menu object in the visual field of the camera; note that all displayed items can be virtual target objects], said system comprising a menu module that can be activated by a user performing a first gesture or second gesture when the menu module is inactive, said system displaying said menu in response to said first or second gesture, wherein the menu will be hidden if the first gesture or the second gesture is performed again while the menu module is active [note in [0121] the display of a menu (activation) responsive to a certain movement of a hand of a user satisfying a certain condition and the hiding of a displayed menu when a movement of a hand satisfies another condition; Examiner notes that nothing precludes the use of the same condition for both activation/display and hiding of a menu; Examiner also notes the detection of movement or states of fingers of a user’s hand].
Noguchi does not explicitly teach that once the menu is active, a user can interact with the menu by tapping or swiping to select or activate options from the menu, wherein the system provides visual or haptic feedback to the user during menu interaction.
Ravasz teaches that a user can interact with an active menu by tapping or swiping to select or activate options from the menu, and providing visual or haptic feedback to the user during menu interaction [note in col. 17, lines 36-51 indicating highlighting menu items as a user performs a sliding gesture as an indication that the highlighted item will be selected upon performing a selection gesture; see fig. 7B and note highlighted menu item 708].
It would have been obvious to one of ordinary skill in the art having the teachings of Noguchi and Ravasz before the effective filing date of the claimed invention to modify Noguchi’s virtual environment menu by specifying that once the menu is active, a user can interact with the menu by tapping or swiping to select or activate options from the menu, and by specifying providing visual or haptic feedback to the user during menu interaction, as per the teachings of Ravasz. The motivation for this combination would be to facilitate distinguishing menu items that are candidates for selection during user interaction, as in the example provided by Ravasz [again see e.g. col. 17, lines 36-51 and fig. 7B], which would enhance Noguchi’s menu interaction.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over NOGUCHI in view of Ravasz, as applied to claim 1, and further in view of Antoniac et al., US PGPUB 2016/0357263 A1 (hereinafter as Antoniac).
Regarding claim 2, the rejection of claim 1 is incorporated. Noguchi further teaches that the first gesture comprises a gesture of the user’s left hand, and the second gesture comprises a gesture of the user’s right hand [note in [0121] that the movement can be of the right hand or left hand of the user].
The previously combined art, however, does not explicitly teach the first and second gesture comprising a thumbs-up gesture.
Antoniac teaches a predefined activation gesture that is a thumbs-up gesture [see e.g. [0193] and the right portion of fig. 5].
All of the features included in the claim limitations are known in Noguchi and Antoniac. The only difference is the combination of these different features into a single application. Thus, it would have been obvious to one of ordinary skill in the art having the teachings of Noguchi and Antoniac before the effective filing date of the claimed invention to explicitly specify that the first gesture comprises a thumbs-up gesture of the user’s left hand, and the second gesture comprises a thumbs-up gesture of the user’s right hand, by combining Noguchi’s teaching of using one hand or the other and Antoniac’s teaching of using a thumbs-up gesture for activating a function, since these features are mere additional features that are in line with one another. The result of the combination would have been predictable to one of ordinary skill in the art, while also allowing distinct single-handed gestures to be recognized to activate functional features of the system. Therefore, it would have been obvious to combine Noguchi and Antoniac to arrive at the claimed invention.
See MPEP 2141, II.A.
Claims 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over “Caputo, Fabio M., Marco Emporio, and Andrea Giachetti. "The Smart Pin: An effective tool for object manipulation in immersive virtual reality environments." Computers & Graphics 74 (2018): 225-233” (hereinafter as Caputo) in view of FAABORG et al., US PGPUB 2018/0024623 A1 (hereinafter as Faaborg).
Regarding independent claim 3, Caputo teaches a system for manipulating a virtual target object (VTO) in a virtual environment [see the title and figs. 1-2] comprising a move module that creates a virtual control object (VCO) having an x, y and z axis and which can be shaped in any form and textured with any material, wherein said VCO initially appears when the move module is initiated by pressing a button or performing a hand gesture [note the creation of the smart pin widget which is a 3D virtual object and thus has x, y, and z axes and has a shape and texture as shown in fig. 2; see also the description in the first paragraph under Section 3 on the left column on p. 227; note that the pin appears responsive to a user’s hand approaching the target object (the cube) which is a hand gesture].
Caputo does not explicitly teach that the VCO initially appears at a location relative to a user's camera position and that said location is determined by analyzing a user's headset height from a floor position and an optimum arm's reach distance.
Faaborg teaches the display of virtual control elements [see e.g. lines 10-11 of [0033]] at a location relative to a user's camera position and that said location is determined by analyzing a user's headset height from a floor position and an optimum arm's reach distance [see [0033] indicating user height and arm span; see also [0035], [0043], and [0047]-[0048]; note height from a floor position in [0051]].
It would have been obvious to one of ordinary skill in the art having the teachings of Caputo and Faaborg before the effective filing date of the claimed invention to modify Caputo’s virtual control object positioning by specifying that the VCO initially appears at a location relative to a user's camera position and that said location is determined by analyzing a user's headset height from a floor position and an optimum arm's reach distance, as per the teachings of Faaborg. The motivation for this combination would be to adapt the positioning of the virtual control elements to match a range of motion associated with the virtual environment, including physical aspects of the user, as suggested by Faaborg [see e.g. [0006]; see also [0043]].
Regarding claim 4, the rejection of claim 3 is incorporated. Caputo further teaches that a position difference between the VCO's x, y, and z-axis before and after a movement of the VCO is appended to the existing position of the VTO and the VTO is moved accordingly [note in fig. 3 (a) the movement of the virtual object (the cube) by appending the relative movement of the smart pin object as also described in 3.1 across pp. 227-228].
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Caputo in view of Faaborg, as applied to claim 3, and further in view of Narita et al., US PGPUB 2017/0264881 A1 (hereinafter as Narita).
Regarding claim 5, the rejection of claim 3 is incorporated. Caputo further teaches that when the VCO is rotated in any direction, the VTO mimics the same rotation [note in fig. 3 (b) the rotation of the virtual object (the cube) mimicking the rotation of the smart pin object as also described in 3.2 on p.228].
The previously combined art, however, does not explicitly teach that the system applies a speed multiplier to accelerate or decelerate the speed of the movement of the VTO based on the distance between the user and the VTO at the moment of action.
Narita teaches applying a speed multiplier to accelerate or decelerate the speed of the movement of a virtual object based on a distance between the user and the object [see claim 1 and note moving an object away from a user at a speed that increases as the object gets farther away from the user].
It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Narita before the effective filing date of the claimed invention to modify the movement of the virtual object in Caputo’s framework by explicitly specifying that the system applies a speed multiplier to accelerate or decelerate the speed of the movement of the VTO based on the distance between the user and the VTO at the moment of action, as per the teachings of Narita. The motivation for this combination would be to control the speed of animation, as suggested by Narita [see e.g. [0071]].
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Caputo in view of Faaborg and Narita, as applied to claim 5, and further in view of Roard et al., US Patent No. 11,449,212 B2 (hereinafter as Roard).
Regarding claim 6, the rejection of claim 5 is incorporated. Narita further teaches that the speed multiplier increases the farther the object moves away from the user [again, see claim 1 and note moving an object away from a user at a speed that increases as the object gets farther away from the user].
The previously combined art, however, does not explicitly teach that speed multiplier increases linearly the farther the VTO moves away from the user until a maximum predetermined threshold is reached.
Roard teaches a movement of a virtual object whose speed increases linearly until a maximum predetermined threshold is reached [note in col. 11, lines 20-27 the constant acceleration (which is a linear increase in speed) of a movement of a UI element until a maximum threshold is reached].
It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Roard before the effective filing date of the claimed invention to modify the movement of the virtual object in Caputo’s framework, which has been modified by the teaching of Narita to incorporate a speed multiplier that increases the farther the object moves away from the user, by further explicitly specifying that the speed increases linearly until a maximum predetermined threshold is reached, as per the teachings of Roard. The motivation for this combination would be to enable developers to create smooth animations for controlled movement of virtual objects by specifying certain movement characteristics, as suggested by Roard [see e.g. col. 1, lines 41-45 and col. 4, lines 20-25].
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Sun (as applied to claim 7) in view of Ravasz.
Regarding claim 8, the rejection of claim 7 is incorporated. Although Sun teaches an activating gesture [see the rejection of claim 7], Sun does not explicitly teach that the first gesture comprises the user’s left index finger positioned and rotating vertically, and the second gesture comprises the user’s right index finger positioned and rotating vertically.
Ravasz teaches a predefined activation gesture that comprises a specific orientation and rotation of a specific finger/digit of a specific hand [see e.g. col. 12, lines 47-51 and col. 12, line 57 - col. 13, line 7; col. 6, lines 48-52], which includes a first gesture comprising the user’s left index finger positioned and rotating vertically, and a second gesture comprising the user’s right index finger positioned and rotating vertically.
All of the features included in the claim limitations are known in Sun and Ravasz. The only difference is the combination of these different features into a single application. Thus, it would have been obvious to one of ordinary skill in the art having the teachings of Sun and Ravasz before the effective filing date of the claimed invention to explicitly specify that the first gesture comprises the user’s left index finger positioned and rotating vertically, and the second gesture comprises the user’s right index finger positioned and rotating vertically, by combining Sun’s teaching of an activating gesture and Ravasz’ teaching of using any of the assigned gesture library entries, including gestures that comprise a specific orientation and rotation of a specific finger/digit of a specific hand, since these features are mere additional features that are in line with one another. The result of the combination would have been predictable to one of ordinary skill in the art, while also allowing distinct single-handed gestures to be recognized to activate functional features of the system. Therefore, it would have been obvious to combine Sun and Ravasz to arrive at the claimed invention.
See MPEP 2141, II.A.
Claims 9, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Sun (as applied to claim 7) in view of Ng et al., US Patent No. 11,734,899 B2 (hereinafter as Ng).
Regarding claim 9, the rejection of claim 7 is incorporated. Sun does not explicitly teach a scale module that can be activated by performing a first scale gesture, a second scale gesture, a third scale gesture or a fourth scale gesture, and that displays a scale menu showing the scale factor of the VTO at a location relative to a user's camera position.
Ng teaches a scale module that can be activated by performing one of a plurality of scale gestures, and that displays a scale menu indicative of a scale factor of a virtual target object (VTO) at a location relative to a user’s camera position [note in col. 8, lines 49-55 indicating scaling a model (VTO) responsive to hand gestures and note the options of increasing or decreasing the size which indicates at least one different gesture for each; see also col. 10, lines 14-18; note from fig. 8, 840 and 845 the display of a scale indicative of a scale factor, as also described in col. 14, lines 1-2; note in col. 3, lines 10-17 describing the commands being performed based on detection through the camera in the user’s headset and corresponding conversions which indicates that displayed items are positioned relative to the camera location].
Ng further teaches a scale factor [again, note in col. 8, lines 53-55 describing a 180% increase in size].
It would have been obvious to one of ordinary skill in the art having the teachings of Sun and Ng before the effective filing date of the claimed invention to modify Sun’s virtual environment manipulation by specifying a scale module that can be activated by performing a first scale gesture, a second scale gesture, a third scale gesture, or a fourth scale gesture (Examiner notes that any arbitrary number would work), and that displays a scale menu indicative of the scale factor of the VTO at a location relative to a user's camera position, as per the teachings of Ng, and to further specify that the scale menu shows the scale factor taught by Ng. The motivation for this combination would be to expand the interactions enabled with the VTO through hand gestures to include scaling, which would be useful to extend the virtual interactions to allow for more sophisticated modeling, as suggested by Ng [see e.g. col. 8, lines 1-7]. It would have also been obvious to explicitly annotate the model to display the scale factor to further facilitate grasping the effect of scaling.
Regarding claim 11, the rejection of claim 9 is incorporated. The previously combined art teaches if the first scale gesture or the second scale gesture is performed longer than 1 second, the system displays the scale menu showing the scale factor and simultaneously starts to decrease the size of the VTO in every axis proportionally [Sun further teaches in [0043] and [0057] the realization of a gesture that has been maintained for more than a time threshold such as 1 second; refer to the rejection of claim 9 for displaying the scale menu showing the scale factor; Ng further teaches in col. 8, line 65 – col. 9, line 2 the decrease in size maintaining the proportions along each dimension; again note in col. 8, lines 49-55 of Ng scaling a model (VTO) responsive to hand gestures and note the options of increasing or decreasing the size which indicates at least one different gesture for each].
Regarding claim 12, the rejection of claim 9 is incorporated. The previously combined art further teaches if the third scale gesture or the fourth scale gesture is performed longer than 1 second, the system displays the scale menu showing the scale factor and simultaneously starts to increase the size of the VTO in every axis proportionally [Sun further teaches in [0043] and [0057] the realization of a gesture that has been maintained for more than a time threshold such as 1 second; refer to the rejection of claim 9 for displaying the scale menu showing the scale factor; Ng further teaches in col. 8, lines 53-55 the increase in size /scale relative to the same car which indicates maintaining the proportions along each dimension; again note in col. 8, lines 49-55 of Ng scaling a model (VTO) responsive to hand gestures and note the options of increasing or decreasing the size which indicates at least one different gesture for each].
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Sun in view of Ng, as applied to claim 9, and further in view of THORSANDER et al., US PGPUB 2013/0227482 A1 (hereinafter as Thorsander).
Regarding claim 10, the rejection of claim 9 is incorporated. Although Sun teaches a gesture performed for less than 1 second [see in [0057] the realization of a gesture that has not been maintained for more than a time threshold; note the 1-second example for the threshold in [0043]], Sun does not explicitly teach that if one of the scale gestures is performed for less than 1 second, the scale menu will appear briefly and disappear.
Thorsander teaches if a gesture is performed for less than a certain threshold, a menu will appear and disappear [see [0051] indicating the ceasing of displaying of a sidebar when a short press gesture is released before it becomes a long press].
It would have been obvious to one of ordinary skill in the art having the teachings of Sun, Ng, and Thorsander, before the effective filing date of the claimed invention, to modify Sun’s virtual environment manipulation, which teaches recognizing a gesture performed for less than 1 second and has been modified by Ng to accept one of the scale gestures that display a scale menu, by further specifying that the scale menu will appear briefly and disappear if one of the scale gestures is performed for less than 1 second, as per the teachings of Thorsander. The motivation for this combination would be to enable ceasing the display in cases where the user has unintentionally performed a short gesture, as suggested by Thorsander [see e.g. [0051]].
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Sun in view of Ng, as applied to claim 9, and further in view of Ravasz.
Regarding claim 13, the rejection of claim 9 is incorporated. Although Sun teaches activating gestures [see the rejection of claim 7], Sun does not explicitly teach that the first, second, third or fourth scale gestures comprises the user’s left index and middle fingers positioned and rotating horizontally.
Ravasz teaches a predefined activation gesture that comprises a specific orientation and rotation of a specific combination of fingers/digits of a specific hand [see e.g. col. 12, lines 47-51 and col. 12, line 57 - col. 13, line 7; col. 6, lines 14-18 and 48-52], which includes a gesture comprising the user’s left index and middle fingers positioned and rotating horizontally.
All of the features included in the claim limitations are known in Sun and Ravasz. The only difference is the combination of these different features into a single application. Thus, it would have been obvious to one of ordinary skill in the art having the teachings of Sun and Ravasz before the effective filing date of the claimed invention to explicitly specify that the first, second, third, or fourth scale gesture comprises the user’s left index and middle fingers positioned and rotating horizontally, by combining Sun’s teaching of an activating gesture and Ravasz’ teaching of using any of the assigned gesture library entries, including gestures that comprise a specific orientation and rotation of a specific combination of fingers/digits of a specific hand, since these features are mere additional features that are in line with one another. The result of the combination would have been predictable to one of ordinary skill in the art, while also allowing distinct single-handed gestures to be recognized to activate functional features of the system. Therefore, it would have been obvious to combine Sun and Ravasz to arrive at the claimed invention.
See MPEP 2141, II.A.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Examiner notes from the cited art, Mlyniec et al., US PGPUB 2013/0104083 A1, which teaches manipulating virtual target objects in a virtual environment by manipulating physical handles/controllers [see figs. 1-5].
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIA S AYAD whose telephone number is (571)272-2743. The examiner can normally be reached Monday-Friday, 7:30 am - 4:30 pm (alternate Fridays), EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARIA S AYAD/Primary Examiner, Art Unit 2172