DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This Office Action is in response to the Amendment filed on 01/06/2026.
3. The IDS submitted on 01/06/2026 has been considered and entered into the application file.
4. Claims 1-6, 20-26, 28-29, and 31-33 are pending.
Response to Arguments
5. Applicant’s arguments with respect to the pending claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
6. Claims 1-4, 20-22, and 31-33 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Honchariw et al. (US 20190357497 A1).
Honchariw et al. (“Honchariw”) is directed to a method for autonomously training an animal to respond to oral commands.
As per claim 1, Honchariw discloses an apparatus, comprising: one or more sensors configured to generate signals indicative of at least one of a spatial position and an orientation of a user (as shown in Fig. 2, [0025] the training apparatus 100 can track the position of the dog's mouth in the 3D working field and adjust both the horizontal position actuator and the vertical orientation sensor such that a primary reinforcer unit ejected by the dispenser follows a trajectory that intersects the mouth of an animal in the working field, thereby further reducing a time from the training apparatus 100 detecting the animal entering a target pose and consumption of a reinforcement by the animal); and
one or more processors configured to receive the signals ([0098] As the training apparatus tracks the animal, the processor identifies postures of the dog that indicate the dog is entering the target pose), the one or more processors configured to execute instructions for:
determining that at least a portion of the user has intersected a defined virtual spatial region or that the user has assumed a defined orientation ([0040] the training apparatus 100 can rapidly determine whether the dog has entered a target position specified in the current training protocol and thus minimize a time to detect that the dog has responded to a command);
generating, based upon the determining, one or more output signals ([0010] During an autonomous training protocol with the animal, if the processor determines that a pose of the animal sufficiently matches a target position, that a pose of the animal sufficiently aligns with a target pose corresponding to a tone or oral command (hereinafter an “audible cue”) replayed through the speaker), and
interfacing with an external application to provide real-time feedback to the user based on the spatial position of the user or the orientation of the user ([0010] During an autonomous training protocol with the animal, if the processor determines that a pose of the animal sufficiently matches a target position, that a pose of the animal sufficiently aligns with a target pose corresponding to a tone or oral command (hereinafter an “audible cue”) replayed through the speaker).
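For illustration of the mapped limitation only, the following minimal sketch shows one way a processor could test whether a tracked point on the user has intersected a defined virtual spatial region, or whether the user has assumed a defined orientation. No such code appears in Honchariw; the region bounds, heading values, and tolerance are hypothetical.
```python
from dataclasses import dataclass

Point = tuple[float, float, float]

@dataclass
class Box3D:
    lo: Point  # minimum (x, y, z) corner of the virtual region, meters
    hi: Point  # maximum (x, y, z) corner of the virtual region, meters

    def contains(self, p: Point) -> bool:
        # True when every coordinate of p lies between the corresponding bounds.
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

def assumed_orientation(heading_deg: float, target_deg: float, tol_deg: float = 15.0) -> bool:
    """True if the reported heading is within tol_deg of the target orientation,
    with correct wrap-around at 360 degrees."""
    return abs((heading_deg - target_deg + 180.0) % 360.0 - 180.0) <= tol_deg

# Example: a tracked mouth position inside a 1 m cube, or a heading near 90 degrees,
# would trigger the one or more output signals used for real-time feedback.
region = Box3D(lo=(0.0, 0.0, 1.0), hi=(1.0, 1.0, 2.0))
triggered = region.contains((0.2, 0.5, 1.1)) or assumed_orientation(92.0, 90.0)
```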
As per claim 2, Honchariw further discloses the apparatus of claim 1, wherein the user is an animal ([0010] Generally, the first method S100 can be executed by an apparatus 100 to train an animal to respond to oral commands (and/or other visible or audible signals)).
As per claim 3, Honchariw further discloses the apparatus of claim 1, further comprising:
one or more components configured to store data, the data including sound data corresponding to one or more prerecorded sounds ([0079] The training apparatus then stores these audio clips locally and uploads to these audio clips to cloud storage. The audio clips can then be downloaded to additional training apparatuses, to mobile devices running the native application, or to web-based applications accessible via login on any appropriate device); and
one or more speakers ([0010] a speaker configured to output audible cues),
the instructions further including instructions for:
selecting at least one of the one or more prerecorded sounds corresponding to the defined virtual spatial region ([0065] prompting the user to record a first audio clip of the user reciting a voice command associated with a target pose within the training protocol, … playing back the first audio clip via an audio driver integrated into the training apparatus in Block S230; in the video feed, detecting a current pose of the animal in Block S240); and
generating, based on the selecting, an output signal based upon the sound data corresponding to the at least one of the one or more prerecorded sounds ([0138] The speaker may also be used to output sounds to mask other sounds generated by the training apparatus that may cause sound sensitive animals to disengage with the training apparatus. For example, clicks and whirring generated by the treat reloading mechanisms may frighten certain animals),
the one or more speakers being configured to produce sound including the at least one of the one or more prerecorded sounds in response to the output signal ([0070] After an initial acclimation period, the training apparatus plays back the user's “sit” voice command using the speakers while the animal is not in the sit pose, and if the dog engages in the desired behavior within a set time-increment after the playback of the voice command, the dispenser dispenses a primary reinforcer to the dog and the speaker plays back a secondary reinforcer, e.g. “good dog” recorded in the user's voice. Also see [0083]).
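For illustration only, a minimal sketch of selecting a stored, prerecorded sound that corresponds to the defined virtual spatial region and emitting an output signal that drives a speaker. The region names, file paths, and speaker interface are hypothetical and do not come from Honchariw.
```python
# Hypothetical mapping from a defined virtual spatial region to a prerecorded clip.
SOUNDS_BY_REGION = {
    "sit_zone": "clips/sit_command.wav",    # e.g., owner-recorded "sit" command
    "mat_zone": "clips/place_command.wav",  # e.g., owner-recorded "place" command
}

def on_region_intersected(region_name: str, speaker) -> None:
    """Select the prerecorded sound for the intersected region and emit the
    output signal that drives the speaker; speaker.play is a stand-in for a
    real audio-driver call."""
    clip = SOUNDS_BY_REGION.get(region_name)
    if clip is not None:
        speaker.play(clip)
```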
As per claim 4, Honchariw further discloses the apparatus of claim 3, wherein the one or more prerecorded sounds comprise phonetic sounds ([0054] Furthermore, as the dog increases a frequency and speed of entering the “sit” pose following output of the audible tone by the training apparatus 100, the training apparatus 100 can transition into outputting a prerecorded oral command—spoken by the dog's owner, as described above—for the “sit” command, such as by replaying a “sit” soundbite before, during, or after outputting the audible tone. Also see [0028], [0031], [0056], and [0057]).
As per claim 20, Honchariw further discloses the apparatus of claim 1, further comprising: one or more visual feedback components configured to produce visual indicators, the instructions further including instructions for generating, based on the determining, an output signal configured to activate the one or more visual feedback components, and the one or more visual feedback components configured to generate visual feedback in response to the output signal ([0069] at the start of a training session, the training apparatus: loads a training protocol for a sit command; displays a visual cue to indicate to the animal that a training session has begun. See also section 2.4, Visual Cue Initialization: [0094] Block S224 of the second method S200 recites: in response to detecting the animal in the working field, activating a visual cue for the duration of the first training session. Generally, in Block S224, the training apparatus accesses the optical sensors and scans the working field for the animal. Upon detection of the animal, the training apparatus displays a visual cue with the visual display to indicate to the animal that a training session is active. For example, the visual display can activate an LED array depicting the shape of a bone during training sessions. The training apparatus may also display the visual cue at the start of a training session when the animal is not detected in the working field).
As per claim 21, Honchariw further discloses the apparatus of claim 1, wherein: the one or more sensors include a camera configured to capture images or video data of an environment of the user ([0034] The training apparatus 100 (or the remote computer system) can then tune an animal model to detect the dog in a color image—recorded by the color camera during a subsequent training protocol—based on these highest-frequency colors more representative of the dog's coat); and
the instructions further include instructions for processing the images or video data to enhance the determining that the user has intersected the defined virtual spatial region or that the user has assumed the defined orientation ([0035] The native application (or the remote computer system) can also estimate a size of the dog from the image selected by the user or otherwise prompt the user to indicate a size of the dog, such as in length or height dimensions or in weight, and then store this size value in the dog's profile. The training apparatus 100 (or the remote computer system) can then tune an animal model to detect the dog in a depth image—recorded by the depth camera during a subsequent training protocol—based on a size of the dog).
As per claim 22, Honchariw further discloses the apparatus of claim 1, further comprising:
one or more components configured to store data, the data including visual data corresponding to one or more predefined visual cues ([0014] As shown in FIG. 2, the training apparatus 100 can include: a suite of optical sensors configured to record images (e.g., color and/or depth images) of a field ahead of the training apparatus 100 (hereinafter a “working field”). [0027] In one variation, the training apparatus 100 also includes a visual display—such as an LED array—configured to output visual cues to the animal. [0084] In one variation, the processor scans frames of the video feed for target markers associated with certain poses of the animal. The markers are defined by the processor based on predefined markers within exemplary poses stored locally on the training apparatus.); and
one or more displays ([0027] In one variation, the training apparatus 100 also includes a visual display—such as an LED array—configured to output visual cues to the animal),
the instructions further include instructions for:
selecting at least one of the one or more predefined visual cues corresponding to the defined virtual spatial region ([0034] In another example, the native application (or a remote computer system) can: extract visual characteristics of the dog from the image selected by the user, such as by extracting frequencies (e.g., rates of recurrence, a histogram) of colors present in a region of the image confirmed by the user as representing the dog); and
generating, based on the selecting, an output signal based upon the visual data corresponding to the at least one of the one or more predefined visual cues ([0069] at the start of a training session, the training apparatus: loads a training protocol for a sit command; displays a visual cue to indicate to the animal that a training session has begun);
the one or more displays being configured to produce a visual output including the at least one of the one or more predefined visual cues in response to the output signal ([0027] In one variation, the training apparatus 100 also includes a visual display—such as an LED array—configured to output visual cues to the animal. In this variation, the system can implement methods and techniques similar to those described below to output a visual queue corresponding to a particular command and target response and to selectively dispense reinforcement to an animal (e.g., in the form of a treat) when the training apparatus 100 detects that the animal has completed the target response. Also see [0034]-[0035] and [0042]).
As per claim 31, Honchariw discloses a non-transitory processor-readable medium having instructions stored thereon ([0141] computer-readable medium storing computer-readable instructions) that, when executed by one or more processors (processor of Fig. 2), cause the one or more processors to:
receive, from one or more sensors, signals indicative of at least one of a spatial position and an orientation of a user ([0010] a processor configured to implement computer vision and/or artificial intelligence techniques to interpret positions and poses of the animal within the field from images recorded by the optical sensors);
determine that at least a portion of the user has intersected a defined virtual spatial region or that the user has assumed a defined orientation ([0061] throughout a training routine, the training apparatus 100 can thus: record a color image and/or depth image of the working field; extract the position and orientation of the dog at the ground plane from the color image and/or depth image);
generate, based upon the determining, one or more output signals; and integrate with an augmented reality system to overlay visual indicators of the defined virtual spatial region or the defined orientation within a field of view of the user ([0061] When the training apparatus 100 then determines that the dog has properly responded to the current command based on a position and orientation of the dog in a later color image and/or depth image, the training apparatus 100 can immediately trigger the dispenser actuator at the last calculated target speed, thereby dispensing a primary reinforcer unit at or near the dog's feet. [0087] In another example, if in a first frame the markers for the dog's paws are below the markers for the dog's head and tail, and in a second frame the markers for the dog's paws are above the markers for the dog's head and tail, and in a third frame the markers for the dog's paws are once again below the markers for the dog's head and tail, the processor can confirm the dog to be rolling over. [0038] By accessing an animal model “tuned” to detecting presence and pose of animals exhibiting characteristics similar to those aggregated into the dog's profile during setup, the training apparatus 100 may detect the presence and orientation of the dog in the working field more quickly and with increased confidence).
As per claim 32, Honchariw discloses a non-transitory processor-readable medium having instructions stored thereon ([0141] computer-readable medium storing computer-readable instructions) that, when executed by one or more processors (processor of Fig. 2), cause the one or more processors to:
receive, from one or more sensors, signals indicative of at least one of a spatial position and an orientation of a user ([0010] a processor configured to implement computer vision and/or artificial intelligence techniques to interpret positions and poses of the animal within the field from images recorded by the optical sensors);
determine that at least a portion of the user has intersected a defined virtual spatial region or that the user has assumed a defined orientation ([0061] throughout a training routine, the training apparatus 100 can thus: record a color image and/or depth image of the working field; extract the position and orientation of the dog at the ground plane from the color image and/or depth image. [0086] For example, if the training apparatus detects that the markers for each of the dog's paws and the marker for the dog's tail are all intersecting with the ground plane, while the markers for the dog's head are at a distance above the ground plane approximately equal to the dog's height, the system can confirm that the dog is in a sitting position. In another example, if the markers for the dog's paws, tail, and mouth are all intersecting with the ground plane, the processor can confirm that the dog is in a laying position.);
generate, based upon the determining, one or more output signals ([0010] During an autonomous training protocol with the animal, if the processor determines that a pose of the animal sufficiently matches a target position, that a pose of the animal sufficiently aligns with a target pose corresponding to a tone or oral command (hereinafter an “audible cue”) replayed through the speaker); and
interface with an external application to provide real-time feedback to the user based on the spatial position of the user or the orientation of the user ([0010] During an autonomous training protocol with the animal, if the processor determines that a pose of the animal sufficiently matches a target position, that a pose of the animal sufficiently aligns with a target pose corresponding to a tone or oral command (hereinafter an “audible cue”) replayed through the speaker. [0012] For example, during a training protocol to teach a dog to respond to a “sit” command, the training apparatus 100 can detect the dog in the field and regularly update the position of the dispenser to align to the position of the dog in the field, such as at a rate of 30 Hz. [0038] By accessing an animal model “tuned” to detecting presence and pose of animals exhibiting characteristics similar to those aggregated into the dog's profile during setup, the training apparatus 100 may detect the presence and orientation of the dog in the working field more quickly and with increased confidence. Also see [0025]).
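For illustration only, the marker-based pose test paraphrased from Honchariw [0086] above can be sketched as follows. The marker names, ground-plane tolerance, and head-height ratio are hypothetical and do not appear in Honchariw.
```python
# Ground-plane tolerance (m): markers at or below this height count as
# intersecting the ground plane. The value is a hypothetical assumption.
GROUND_TOL = 0.05

def on_ground(height_m: float) -> bool:
    return height_m <= GROUND_TOL

def classify_pose(markers: dict[str, float], dog_height_m: float) -> str:
    """markers maps marker names to heights above the ground plane (m)."""
    paws_down = all(on_ground(markers[m]) for m in ("paw_fl", "paw_fr", "paw_rl", "paw_rr"))
    tail_down = on_ground(markers["tail"])
    mouth_down = on_ground(markers["mouth"])
    head_near_full_height = abs(markers["head"] - dog_height_m) <= 0.2 * dog_height_m
    if paws_down and tail_down and mouth_down:
        return "lay"   # paws, tail, and mouth all intersect the ground plane
    if paws_down and tail_down and head_near_full_height:
        return "sit"   # paws/tail down, head at roughly the dog's full height
    return "other"
```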
As per claim 33, Honchariw discloses a non-transitory processor-readable medium ([0141] computer-readable medium storing computer-readable instructions) having instructions stored thereon that, when executed by one or more processors (processor of Fig. 2), cause the one or more processors to:
receive, from one or more sensors, signals indicative of at least one of a spatial position and an orientation of a user ([0025] In this variation, the training apparatus 100 can track the position of the dog's mouth in the 3D working field and adjust both the horizontal position actuator and the vertical orientation sensor such that a primary reinforcer unit ejected by the dispenser follows a trajectory that intersects the mouth of an animal in the working field, thereby further reducing a time from the training apparatus 100 detecting the animal entering a target pose and consumption of a reinforcement by the animal. Also see [0038] and [0040]);
determine that at least a portion of the user has intersected a defined virtual spatial region or that the user has assumed a defined orientation ([0040] During a training protocol, the training apparatus 100 can regularly: record color and/or depth images of the working field; implement the animal model to detect a position and pose of the dog; and regularly update the position of the horizontal position actuator (and the vertical position actuator) to align the dispenser to the dog (e.g., to the center of the dog's front feet or to the dog's mouth) in real-time, such as at a rate of 30 Hz);
generate, based upon the determining, one or more output signals ([0040] Furthermore, by tracking the dog in the working field and updating positions of various actuators in the training apparatus 100 in real-time to align the output of the dispenser to the dog, the training apparatus 100 can immediately eject a primary reinforcer unit directly at the dog's feet and thus minimize a time from detecting that the dog has responded to a command to consumption of a reinforcement by the dog. Also see [0056]); and
generate visual feedback through a display device in response to the determining, the visual feedback including graphical indicators of the defined virtual spatial region or the defined orientation ([0027] In one variation, the training apparatus 100 also includes a visual display—such as an LED array—configured to output visual cues to the animal. In this variation, the system can implement methods and techniques similar to those described below to output a visual queue corresponding to a particular command and target response and to selectively dispense reinforcement to an animal (e.g., in the form of a treat) when the training apparatus 100 detects that the animal has completed the target response).
7. Claims 23-26 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by PRADEEP et al. (US 20160302393 A1).
PRADEEP et al. (“PRADEEP”) is directed to an intelligent pet monitoring system.
As per claim 23, PRADEEP discloses a method (e.g., flowchart of Fig. 6), comprising:
receiving, at one or more processors and from one or more sensors, signals indicative of at least one of a spatial position and an orientation of a user ([0004] In one example, a system includes a pet monitoring device having a plurality of sensors that gather measurement data, such as motion and arousal, from a pet. [0005] In another example, a method includes receiving measurement data at a monitoring hub from sensors associated with a pet monitoring device);
determining, at the one or more processors, that at least a portion of the user has intersected a defined virtual spatial region or that the user has assumed a defined orientation ([0024] For instance, the wearable pet monitoring device 111 can collect data regarding a pet's motions, orientation, and physiology. Also see [0039]);
generating, at the one or more processors and based upon the determining, one or more output signals ([0074] audio sensor 725 can also be used to gather data measurements associated with the pet's surroundings and environment. In addition, the audio sensor 725 can be used to gather data measurements about sounds from the pet, such as vocalizations, etc.); and
integrating with a virtual reality (VR) or augmented reality (AR) system to provide immersive feedback based on the defined virtual spatial region or the defined orientation, the VR or AR system overlays within a field of view of the user virtual objects or indicators corresponding to the defined virtual spatial region or the defined orientation ([0073] In some examples, a projector 711 can be included as part of the monitoring hub 701. For instance, a projector 711 can be included as part of a pet station and can be used to display lights or images for the pet to see. This feature can be useful to augment the environment with soothing lights, colors, or images. In some examples, this may be used to present learning content to the pet. [0045] Furthermore, data about a pet's environment, such as audio levels, time of day, location, etc. can also be considered. Additional data from one or more owners, such self-reporting and lesson feedback can also be considered. Also see [0084]).
As per claim 24, PRADEEP further discloses the method of claim 23, wherein the user is an animal (pet 107, Fig. 1).
As per claim 25, PRADEEP further discloses the method of claim 23, further comprising: dynamically adjusting the defined virtual spatial region based on contextual data, the contextual data includes at least one of environmental conditions, user preferences, and historical data ([0025] For instance, if a determination is made that environmental conditions are not suitable for a pet, the monitoring hub can make suggestions including ways to reduce noise, light intensity, visual clutter, etc. In particular, suggestions may include closing windows, turning off lights, reducing the amount of items in the room, etc.).
As per claim 26, PRADEEP further discloses the method of claim 23, further comprising: using machine learning algorithms to improve an accuracy of the determining over time based on collected user data ([0029] In addition, the platform 115 performs machine learning on aggregated measurement data, sensor data, and any other development metrics to generate models that predict upcoming behaviors, developments, activities, etc., according to various examples. For instance, measurement data can be used to generate models based on patterns in activity, and these models can be used by particular pet monitoring systems to predict an upcoming activity. [0048] According to various embodiments, data is first collected about the pet, the data is scaled, and then a model or prediction is applied to the pet. Specifically, aggregated data can be collected at the platform, as described above with regard to FIG. 2, and models, predictions, etc. can be developed. These models, etc. can then be accessed from the platform by individual monitoring hubs. A particular pet monitoring system can then perform hub processing 321 that can use these models, etc. to analyze measurement data for a particular pet).
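For illustration only, a minimal sketch of the kind of machine-learning loop PRADEEP describes at [0029] and [0048]: aggregated measurement data is used to fit a model whose predictions refine the determining step over time. The feature set and the use of scikit-learn are assumptions for illustration, not PRADEEP's actual implementation.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical aggregated measurements: each row is
# [motion_level, arousal, audio_level, hour_of_day]; labels are observed activities.
X = np.array([[0.9, 0.7, 0.4, 9],
              [0.1, 0.2, 0.1, 23],
              [0.8, 0.9, 0.6, 17]])
y = np.array(["play", "sleep", "play"])

# Fit a model on the collected data; as more labeled measurements accumulate,
# the model is refit, improving the accuracy of the determining step.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Predict the upcoming activity for a new sensor reading.
print(model.predict(np.array([[0.85, 0.8, 0.5, 16]])))  # e.g., ["play"]
```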
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claims 28 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over PRADEEP et al. in view of TROTTIER et al. (US 20210337767 A1). Note: TROTTIER et al. (“TROTTIER”) is a continuation of application No. PCT/US2020/064122, filed on Sep. 12, 2020.
TROTTIER is directed to devices for providing language-like abilities for dogs. Using polygonal layouts, audible, scent, visible and other cues, the devices provide an efficient and novel ability to communicate with dogs in a granular manner.
As per claim 28, PRADEEP further discloses the method of claim 23, further comprising:
storing data, the data including sound data corresponding to one or more prerecorded sounds ([0068] According to various embodiments, monitoring hub 701 can provide data pre-processing, ambient sensing (local sensing of environment, vibration sensing, audio sensors, cameras), content cache, and/or pet status assessment. [0074] Audio sensor 725 can also be used to gather data measurements associated with the pet's surroundings and environment. In addition, the audio sensor 725 can be used to gather data measurements about sounds from the pet, such as vocalizations, etc.).
But PRADEEP does not clearly teach selecting at least one of the one or more prerecorded sounds corresponding to the defined virtual spatial region; generating an output signal based on the sound data corresponding to the at least one of the one or more prerecorded sounds; and producing sound including the at least one of the one or more prerecorded sounds in response to the output signal.
TROTTIER is directed to use of semantic boards and semantic buttons for training and assisting the expression and understanding of language ([0391] In another approach, a set of virtual buttons can be projected on the ground. In a preferred embodiment, the dog would have a transmitter that would instruct the projector as to which virtual buttons to project in which location.).
TROTTIER further discloses selecting at least one of the one or more prerecorded sounds corresponding to the defined virtual spatial region ([0144] Semantic board or tile: As used herein, the term “semantic board” or “semantic tile” refers to a physical or digital object on which semantic buttons can be arranged. [0362] The button includes the features of the one presented in FIG. 1: a microphone, or set of microphones, for recording a sound (e.g., a person saying the dog's name). A way to trigger recording and storing of the sounds transduced by the microphone. A speaker for playing back the sound recorded at sufficiently high fidelity. A button that is easy to depress in order to trigger playback via the speaker.).
TROTTIER further discloses generating an output signal based on the sound data corresponding to the at least one of the one or more prerecorded sounds ([0151] For example, the sound may come from the tile, another button, a mobile device, speakers, an alarm system, the dog's collar or other wearable, or other sound generator. Also see [0362], quoted above).
TROTTIER further discloses producing sound including the at least one of the one or more prerecorded sounds in response to the output signal ([0151] The semantic boards accommodate sound-producing semantic buttons on a side of the semantic board, and the semantic boards may be placed horizontally, such as on a floor, vertically, such as on a wall, or sloped, such as on a tilted surface. The semantic boards are designed so that sound-causing semantic buttons can be securely affixed to the face shown to the learner).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the above teachings of TROTTIER with PRADEEP so that the movement patterns, orientation, and/or position of the pet would be monitored and identified in a virtual environment when the pet interacts with or points toward the virtual buttons or virtual symbols of TROTTIER.
Therefore, it would have been obvious to combine TROTTIER with PRADEEP to obtain the claimed invention.
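For illustration of the proposed combination only: the pet's tracked ground position (per PRADEEP's monitoring) is tested against virtual buttons projected on the ground (TROTTIER [0391]), and intersecting a button selects and plays its prerecorded sound. The button layout, clip paths, and audio interface below are hypothetical.
```python
import math

# Hypothetical layout of projected virtual buttons:
# (center_x_m, center_y_m, radius_m, prerecorded clip for that button/region).
VIRTUAL_BUTTONS = [
    (0.5, 0.5, 0.15, "clips/outside.wav"),
    (1.0, 0.5, 0.15, "clips/water.wav"),
]

def button_hit(pet_xy: tuple[float, float]):
    """Return the clip for the first projected button the pet's tracked
    ground position falls inside, else None."""
    for cx, cy, r, clip in VIRTUAL_BUTTONS:
        if math.dist(pet_xy, (cx, cy)) <= r:
            return clip
    return None

def on_tracking_update(pet_xy: tuple[float, float], speaker) -> None:
    clip = button_hit(pet_xy)
    if clip is not None:
        speaker.play(clip)  # output signal: play the sound for the touched region
```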
As per claim 29, PRADEEP in view of TROTTIER further discloses the method of claim 28, wherein the one or more prerecorded sounds include phonetic sounds (TROTTIER, [0386] For example, the system might play the sound corresponding to the word “outside” even though the button hadn't been physically triggered, and instead was “neurally triggered” because the dog generated the neural activity pattern associated with the pressing of the button for ‘outside’).
9. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Honchariw et al. (US 20190357497 A1) in view of Marmen et al. (US 20170135315 A1).
Marmen et al. (“Marmen”) is directed to animal wearable devices, systems, and methods.
As per claim 5, Honchariw does not teach “haptic feedback”; that is, Honchariw fails to disclose “one or more haptic feedback components configured to produce haptic feedback, the instructions further including instructions for generating, based on the determining, an output signal configured to activate the one or more haptic feedback components, the one or more haptic feedback components being configured to generate first haptic feedback in response to the output signal.”
Marmen, on the other hand, discloses a wearable device (for example, collar 110-d). [0077] Device 100-d may include a collar 110-d, a computerized controller 130-d, and multiple stimulation components 120-m/120-n. The stimulation components 120-m/120-n may be configured in different configurations on the collar 110-d. In some embodiments, the stimulation components 120-m/120-n include vibration components, e.g., vibration motors, that may use haptic feedback to control, protect, and/or train the animal, for example. In some embodiments, the haptic feedback may be used for controlling, protecting, and/or training purposes, as discussed in more detail below. Different vibration patterns, for example, may provide different cues to the animal. Different vibration patterns may provide more than a typical negative signal (e.g., a shock) to an animal. For example, the use of multiple vibration motors may be utilized to provide positive reinforcement, direction information, and/or other commands to an animal.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate Marmen's wearable device (collar) with the dog of Honchariw so that the stimulation components on the collar provide vibration-based haptic feedback as well as positive reinforcement, direction information, and/or other commands to the pet of Honchariw.
Therefore, it would have been obvious to combine Marmen with Honchariw to obtain the invention as specified in claim 5.
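For illustration of the proposed combination only, a minimal sketch of Marmen-style haptic cueing in which different vibration patterns convey different cues through collar-mounted motors. The pattern encodings and the collar driver interface are hypothetical, not Marmen's actual API.
```python
import time

# Hypothetical vibration patterns: lists of (on_seconds, off_seconds) pulses.
VIBRATION_PATTERNS = {
    "positive_reinforcement": [(0.1, 0.2), (0.1, 0.2)],  # two short pulses
    "turn_left": [(0.4, 0.3)],                           # one long pulse, left motor
}

def send_haptic_cue(collar, cue: str, motor: str = "left") -> None:
    """Drive a collar-mounted vibration motor with the pattern for a cue;
    collar.vibrate is a stand-in for a real wearable-device driver call."""
    for on_s, off_s in VIBRATION_PATTERNS[cue]:
        collar.vibrate(motor, duration=on_s)
        time.sleep(off_s)
```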
Allowable Subject Matter
10. Claim 6 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
11. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kates (US 7424867 B2) discloses a computer-aided training and management system that uses a computer or other processor in wireless communication with an instrumented dog collar and/or, optionally, one or more dog interaction devices, such as, for example, video monitors, loudspeakers, video cameras, training toys (e.g., ball, bone, moving toy, etc.), an animatronics "trainer," a treat dispenser, a food dispensing and monitoring device, a water dispensing and monitoring device, tracking devices, a dog door, a dog-monitoring doghouse, and a dog-monitoring dog toilet. In one embodiment, the instrumented dog collar is in two-way communication with a central computer system.
12. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
13. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TADESSE HAILU, whose telephone number is (571) 272-4051 and whose email address is Tadesse.hailu@USPTO.GOV. The examiner can normally be reached Monday-Friday, 9:30-5:30 (Eastern time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William L. Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TADESSE HAILU/Primary Examiner, Art Unit 2174